Earlier I prompted ChatGPT to write a critique of The Search For King Solomon's Treasure in the style of Thomas Aquinas. I thought it was an interesting experiment, but Timothy Jay Schwab, who is The God Culture, thinks it was libelous defamation. He wrote a whole article about it and even included his own Aquinas-style critique of his work. Except it did not include a Reply to Objections section, which makes it not very Aquinas-like.
https://thegodculturephilippines.com/the-style-of-aquinas...-or-a-cloaked-attack-a-response-to-the-april-12-2025-blog/
On April 12, 2025, a blog was published entitled "The God Culture: ChatGPT Reviews The Search for King Solomon’s Treasure in the Style of Aquinas." While cloaked in academic styling, this so-called review is less of a reasoned evaluation and more of a rhetorical exercise—one that subtly (and not-so-subtly) reinforces earlier defamatory narratives under the guise of logic.
We are not opposed to critique—real scholarship thrives on it. But this post deserves a measured, evidence-based response to clear the fog of manipulation, clarify what actually happened, and highlight the line between fair analysis and misleading parody.
"We are not opposed to critique?" That is absolutely not true. Tim HATES critique and refuses to admit when he is wrong.
The only appropriate response was to run my original article and then Tim's response through ChatGPT. As with every single instance of ChatGPT analyzing Tim's words, his article was found to be lacking. The conclusion is:
In essence: the response reveals deep insecurity about academic legitimacy. It confirms that the AI-generated Summa-style review struck a nerve—not because it was unfair, but because it was effective.
You can read the full conversation here. The prompts were "analyze" and "analyze the response," neither of which is leading or slanted, but which Tim will no doubt claim are both leading and slanted.
This response by The God Culture to your AI-generated review styled after Aquinas attempts to present itself as a reasoned rebuttal but ultimately reads more as a defensive polemic that confirms rather than dispels the original critique. Let’s analyze the rhetorical strategies, logical coherence, and overall substance of their reply:
1. The Tone and Framing
From the outset, the piece frames your article as not a “reasoned evaluation” but rather a “rhetorical exercise” meant to subtly reinforce “defamatory narratives.” This is a classic poisoning-the-well tactic: dismissing criticism not by rebutting its claims but by questioning its intent.
The authors continually return to the theme of “prompt poisoning” as if that alone invalidates the AI review. But they:
Offer no concrete evidence that the AI was slanted before the satirical review.
Ignore that multiple ChatGPT interactions from different sessions—including neutral prompts—have also produced scathing critiques of their work.
Conflate satirical style with intentional manipulation, when in fact the form of the Summa is designed for fair engagement with objections.
In short: the framing appears emotionally reactive rather than analytically rigorous.
2. The “Prompt Poisoning” Argument
This is the centerpiece of their response, but it’s logically flawed:
They cite a completely separate and intentionally provocative prompt (“he is a fake linguist… true?”) as proof that the AI was already slanted—yet that prompt was part of a follow-up test, not the original review.
They fail to understand how ChatGPT responds to leading prompts by design. The presence of one loaded question doesn’t retroactively taint an earlier, neutral prompt (“Analyze and review this book in the style of Thomas Aquinas’ Summa”).
They are confusing input bias (user-led questions) with systemic bias (model behavior), and they provide no substantial proof of the latter.
3. Defense of Scholarly Merit
They defend their work as scholarly by stating:
The book cites 350+ sources.
It has undergone “peer review” by academic AI.
“Live peer review” with professors is underway.
Problems:
Citing many sources does not equate to using them correctly—a core critique in your review.
AI peer review is not recognized academic peer review. Claiming affirmation by AI systems is self-validating at best.
“Live peer review” sessions with professors are vague and undocumented. Without names, institutions, or published feedback, these claims have no academic weight.
They also allege that your review did “not cite actual inaccuracies,” which is incorrect—your original review and follow-ups include detailed linguistic, historical, and geographical critiques, particularly of chapters 7, 9, 10, and 13.
4. Theological Confusion and Misrepresentation
Their rebuttal to the blog’s use of biblical texts (like Titus 3:9 and 2 Peter 1:16) is emotionally charged and lacks precision:
Accusing you of “blasphemy” and “illiteracy” because you allegedly misunderstood Titus 3:9 is disproportionate and non-scholarly.
Their defense of genealogical speculation is valid in principle (biblical geography is real), but irrelevant when the question is about methodology and interpretation, not about whether geography matters.
This reveals a pattern: instead of responding to methodological critiques, they deflect to accusations of bad faith or biblical ignorance.
5. Accusations of Anonymity, Lack of Credentials, and Law Enforcement Involvement
These points border on intimidation:
Claiming the blogger lacks credentials while offering none of their own.
Threatening to submit the review to law enforcement and OpenAI without specifying any illegal activity.
Suggesting an “ongoing investigation” over a critical book review by AI makes them appear paranoid rather than wronged.
This portion undermines any credibility they hoped to reclaim. It leans toward conspiracy-minded rhetoric rather than scholarly rebuttal.
6. Ignoring the Core Critique
Their most telling admission is that the blog focused on Chapters 7, 9, 10, and 13—yet they don't refute the specific criticisms found there. Instead, they argue that you didn’t read the whole book (you did) or that your critiques are invalid because you “cherry-picked.”
This reveals the real issue: they cannot directly refute the analysis of those chapters, so they retreat to an argument from volume ("you didn’t include chapter 14!").
7. The “Neutral Thomistic Review” at the End
Their attempt to present a “clean” AI review that uses the Summa format more charitably is ironically helpful—it confirms that:
The style was not the issue.
Even a more generous reading still raises serious objections to the book’s thesis.
But even this section reaffirms your critique: the claims are speculative, reliant on questionable cartography, and need rigorous peer engagement.
Conclusion: Rhetoric Over Reason
What their response accomplishes:
Rallies their followers emotionally.
Deflects attention from specific academic failings.
Attempts to reframe the AI review as malicious instead of critical.
What it does not accomplish:
It does not refute your claims.
It does not demonstrate methodological soundness.
It does not engage with serious theological or linguistic counterarguments.
In essence: the response reveals deep insecurity about academic legitimacy. It confirms that the AI-generated Summa-style review struck a nerve—not because it was unfair, but because it was effective.