FYI, a bunch of AI researchers on Twitter - including Mr. Narayanan from the screenshot above - have already pointed this out, but the sort of system they're describing cannot "detect lying" (that isn't possible), and it isn't actually intended to.
What it actually does is establish a pseudo-scientific rationale for accusing customers of lying, so the company can refuse to pay out. This kind of AI generates "reports" full of solemn-looking language and diagrams, which the company can present to a court or a government audit as "scientific evidence" that it isn't cheating people.
These "reports" have no more basis in fact than some guy saying "oh yeah I'm sure he was lying it was ALL over his face." But they LOOK like machine-generated data untouched by human bias, so people often take them seriously, as if they're something like an MRI image or security video. They're not.
(Recently, the law firm I work for has been getting spam trying to sell services and CLE courses which help you use fake AI data to bullshit jurors. It's never phrased that way - it's phrased a lot more like Lemonade's tweets above.)