I feel there is a difference between entering notes in the score/pianoroll and playing them in. However, in the end I spend a lot of time manually quantizing what I played live back onto the grid.
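For anyone curious what "quantizing to the grid" means in practice, here's a minimal sketch (a hypothetical helper, not any DAW's actual function): snap each note onset to the nearest grid line, with an optional strength so the snap can be partial rather than hard.

```python
def quantize(onsets, grid=0.25, strength=1.0):
    """Move each onset (in beats) toward the nearest multiple of `grid`.

    strength=1.0 snaps fully to the grid; strength=0.5 moves each note
    halfway there, which keeps some of the live feel.
    """
    quantized = []
    for t in onsets:
        nearest = round(t / grid) * grid        # closest grid line
        quantized.append(round(t + (nearest - t) * strength, 6))
    return quantized

# Slightly rushed/dragged 16th notes, hard-snapped:
print(quantize([0.02, 0.27, 0.49, 0.76]))  # -> [0.0, 0.25, 0.5, 0.75]
```

Partial strength is often what I actually want: it tightens the timing without flattening the performance completely.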

I'm currently working in Dorico for things that have to be precise on the score, and I like how its humanizing algorithms give a good foundation to start from. Everything is transparent, so you can see what the machine did and adjust it to your taste.
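The common idea behind a humanize pass (this is a generic sketch of the technique, NOT Dorico's actual algorithm) is to nudge quantized onsets and velocities by small random amounts so the result sounds less mechanical:

```python
import random

def humanize(notes, timing_jitter=0.02, velocity_jitter=8, seed=None):
    """notes: list of (onset_in_beats, velocity 0-127) pairs.

    Returns copies with small random offsets. A fixed seed makes the
    pass reproducible, so you can inspect and re-run it to taste.
    """
    rng = random.Random(seed)
    out = []
    for onset, vel in notes:
        onset += rng.uniform(-timing_jitter, timing_jitter)
        vel = max(1, min(127, vel + rng.randint(-velocity_jitter, velocity_jitter)))
        out.append((round(onset, 6), vel))
    return out
```

The point about transparency matters here: because the offsets are just numbers applied on top of the quantized positions, you can see exactly what was changed and scale or revert it.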

I think a mix of both may work. If a part doesn't sound human enough, play it again live, or add a layer played live. This seems to me to offer the best of both worlds.