In the Linux kernel, the following vulnerability has been resolved:

nfsd: fix RELEASE_LOCKOWNER

The test on so_count in nfsd4_release_lockowner() is nonsense and harmful. Revert to using check_for_locks(), changing that to not sleep.

First: harmful. As is documented in the kdoc comment for nfsd4_release_lockowner(), the test on so_count can transiently return a false positive, resulting in a return of NFS4ERR_LOCKS_HELD when in fact no locks are held. This is clearly a protocol violation, and with the Linux NFS client it can cause incorrect behaviour.

If RELEASE_LOCKOWNER is sent while some other thread is still processing a LOCK request which failed because, at the time that request was received, the given owner held a conflicting lock, then the nfsd thread processing that LOCK request can hold a reference (conflock) to the lock owner that causes nfsd4_release_lockowner() to return an incorrect error.

The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because it never sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, so it knows that the error is impossible. It assumes the lock owner was in fact released, so it feels free to use the same lock owner identifier in some later locking request.

When it does reuse a lock owner identifier for which a previous RELEASE failed, it will naturally use a lock_seqid of zero. However the server, which didn't release the lock owner, will expect a larger lock_seqid and so will respond with NFS4ERR_BAD_SEQID.

So clearly it is harmful to allow a false positive, which testing so_count allows.

The test is nonsense because, well, it doesn't mean anything. so_count is the sum of three different counts:

1/ the set of states listed on so_stateids
2/ the set of active vfs locks owned by any of those states
3/ various transient counts such as for conflicting locks

When it is tested against '2' it is clear that one of these is the transient reference obtained by find_lockowner_str_locked(). It is not clear what the other one is expected to be.

In practice, the count is often 2 because there is precisely one state on so_stateids. If there were more, this test would fail.

In my testing I see two circumstances when RELEASE_LOCKOWNER is called. In one case, CLOSE is called before RELEASE_LOCKOWNER. That results in all the lock states being removed, and so the lockowner being discarded (it is removed when there are no more references, which usually happens when the lock state is discarded). When nfsd4_release_lockowner() finds that the lock owner doesn't exist, it returns success.

The other case shows an so_count of '2' and precisely one state listed on so_stateids. It appears that the Linux client uses a separate lock owner for each file, resulting in one lock state per lock owner, so this test on '2' is safe. For another client it might not be safe.

So this patch changes check_for_locks() to use the (newish) find_any_file_locked() so that it doesn't take a reference on the nfs4_file, never calls nfsd_file_put(), and so never sleeps. With that change it is safe to restore the use of check_for_locks() rather than testing so_count against the mysterious '2'.
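
For illustration only, the following is a simplified sketch of the shape of the fixed code, not the verbatim upstream diff: check_for_locks() walks the inode's POSIX lock list while holding fp->fi_lock and looks up the file with find_any_file_locked(), so it never takes an nfsd_file reference, never calls nfsd_file_put(), and never sleeps; the caller in nfsd4_release_lockowner() then checks each lock stateid with check_for_locks() instead of the so_count heuristic. Error handling, client-lock handling, and stateowner reference management in the caller are abbreviated, and field names follow the nfsd code as of the kernel this fix targets.

    /* Abbreviated sketch, modelled on fs/nfsd/nfs4state.c after the fix. */
    static bool check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
    {
        struct file_lock_context *flctx;
        struct file_lock *fl;
        struct nfsd_file *nf;
        bool found = false;

        spin_lock(&fp->fi_lock);
        /* No reference is taken on the nfsd_file, so no nfsd_file_put()
         * is needed later and this function never sleeps. */
        nf = find_any_file_locked(fp);
        if (nf) {
            flctx = locks_inode_context(file_inode(nf->nf_file));
            if (flctx && !list_empty_careful(&flctx->flc_posix)) {
                spin_lock(&flctx->flc_lock);
                list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
                    if (fl->fl_owner == (fl_owner_t)lowner) {
                        found = true;
                        break;
                    }
                }
                spin_unlock(&flctx->flc_lock);
            }
        }
        spin_unlock(&fp->fi_lock);
        return found;
    }

    /* In nfsd4_release_lockowner(), instead of the so_count heuristic,
     * every lock stateid owned by the lockowner is checked (fragment): */
    list_for_each_entry(stp, &lo->lo_owner.so_stateids, st_perstateowner) {
        if (check_for_locks(stp->st_stid.sc_file, lo)) {
            /* Locks are genuinely still held: refuse to release. */
            status = nfserr_locks_held;
            goto out_unlock;
        }
    }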
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .
In the Linux kernel, the following vulnerability has been resolved:nfsd: fix RELEASE_LOCKOWNERThe test on so_count in nfsd4_release_lockowner() is nonsense andharmful. Revert to using check_for_locks(), changing that to not sleep.First: harmful.As is documented in the kdoc comment for nfsd4_release_lockowner(), thetest on so_count can transiently return a false positive resulting in areturn of NFS4ERR_LOCKS_HELD when in fact no locks are held. This isclearly a protocol violation and with the Linux NFS client it can causeincorrect behaviour.If RELEASE_LOCKOWNER is sent while some other thread is stillprocessing a LOCK request which failed because, at the time that requestwas received, the given owner held a conflicting lock, then the nfsdthread processing that LOCK request can hold a reference (conflock) tothe lock owner that causes nfsd4_release_lockowner() to return anincorrect error.The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because itnever sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, soit knows that the error is impossible. It assumes the lock owner was infact released so it feels free to use the same lock owner identifier insome later locking request.When it does reuse a lock owner identifier for which a previous RELEASEfailed, it will naturally use a lock_seqid of zero. However the server,which didn t release the lock owner, will expect a larger lock_seqid andso will respond with NFS4ERR_BAD_SEQID.So clearly it is harmful to allow a false positive, which testingso_count allows.The test is nonsense because ... well... it doesn t mean anything.so_count is the sum of three different counts.1/ the set of states listed on so_stateids2/ the set of active vfs locks owned by any of those states3/ various transient counts such as for conflicting locks.When it is tested against 2 it is clear that one of these is thetransient reference obtained by find_lockowner_str_locked(). It is notclear what the other one is expected to be.In practice, the count is often 2 because there is precisely one stateon so_stateids. If there were more, this would fail.In my testing I see two circumstances when RELEASE_LOCKOWNER is called.In one case, CLOSE is called before RELEASE_LOCKOWNER. That results inall the lock states being removed, and so the lockowner being discarded(it is removed when there are no more references which usually happenswhen the lock state is discarded). When nfsd4_release_lockowner() findsthat the lock owner doesn t exist, it returns success.The other case shows an so_count of 2 and precisely one state listedin so_stateid. It appears that the Linux client uses a separate lockowner for each file resulting in one lock state per lock owner, so thistest on 2 is safe. For another client it might not be safe.So this patch changes check_for_locks() to use the (newish)find_any_file_locked() so that it doesn t take a reference on thenfs4_file and so never calls nfsd_file_put(), and so never sleeps. Withthis check is it safe to restore the use of check_for_locks() ratherthan testing so_count against the mysterious 2 .