If the datanodes have free space but writes to HDFS still fail, try reducing the block size.
The reasoning behind a smaller block size is this: a write succeeds only if each individual block can be placed in full on a datanode. If free space is low on every datanode but collectively large across the cluster, many small blocks spread across the nodes have a better chance of evading a disk-full condition than a few large contiguous blocks, each of which requires a single node with enough room.
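As a hypothetical illustration (the node names, sizes, and greedy placement are made up for this sketch, and replication is ignored), the following shows why smaller blocks can fit where a large block cannot:

```python
# Hypothetical free space (in MB) on each datanode; 300 MB total across the cluster.
free_mb = {"dn1": 90, "dn2": 110, "dn3": 100}

def write_file(file_mb, block_mb, free):
    """Greedily place each block on the datanode with the most free space.

    Returns True if every block fits (replication factor 1 for simplicity)."""
    free = dict(free)
    remaining = file_mb
    while remaining > 0:
        block = min(block_mb, remaining)
        target = max(free, key=free.get)
        if free[target] < block:
            return False  # no single node can hold this block -> the write fails
        free[target] -= block
        remaining -= block
    return True

# A 240 MB file with 128 MB blocks fails: no node has 128 MB free,
# even though the cluster has 300 MB free in total.
print(write_file(240, 128, free_mb))  # False
# The same file with 32 MB blocks succeeds by spreading blocks across nodes.
print(write_file(240, 32, free_mb))   # True
```

In practice, the block size can usually be overridden per write rather than cluster-wide, e.g. `hdfs dfs -D dfs.blocksize=33554432 -put localfile /path` for 32 MB blocks.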