Finding "AI" inaccuracies is the least surprising thing in the world. Given how LLMs work and their extremely well-documented failures to produce accurate information, the burden of proof lies squarely on "AI" vendors to show the accuracy of their products. To say that they have thus far failed to do so is... generous.
None of this snake oil should be touching news.
Take LGBTQ+ rights. Hell, even narrow it down to trans rights. One side finds people like me inconvenient to talk about; the other wants us denied all medical care, despite the disastrous effects that has on suicide rates (especially amongst trans kids). What is the "balanced" perspective there? What's the "center" view that you're striving to achieve with your stochastic parrot engines?
Even if LLMs did what you claim they do (they don't), your stated objectives are reprehensible and, if achieved, will get people killed.