Because it would sound horrible and it still wouldn't match the movement of the lips. That's why.
Not if it's done perfectly.
Which is to say, even if they got the lips to match up with the audio dubbing they're planning to do, think about why it's still not going to look or sound right (some of the reasons have already been given in this thread: dubs, at least for movies coming into the US, often have poorly chosen voices and poor performances, and even then the voices often sound wrong). But, as alluded to with the previous point that this tech seems to work only in some very select test cases, think about all the visual cues we take in with line delivery (emphatic body movement, bobs of the head while yelling, hell, even how the body is breathing sometimes) that are still going to be visually dissonant, and I can't help but wonder if altering the visuals so the lips match will make it seem even more mentally dissonant (hence the uncanny valley comment I made).

This is what I was thinking when I contemplated it: if they took a favorite foreign movie of mine, recreated the audio based on the actor's voice but in English, with the output coached by a voice director to get the AI to give a delivery similar to the original performance, and *then* matched the lips to the lines, would I be interested in watching that? Not even necessarily in place of the original, but could that be an experience I've missed from not knowing the language? I admit there'd be some curiosity - maybe even just to watch it once. But thinking about it, I just don't think the lip syncing would work in too many scenes. Maybe some of them, but probably not most; not without fundamentally altering the entire image of the actor as they're talking - stuff well beyond the mouth.