In several articles I have disputed the claim of Timothy Jay Schwab, who is The God Culture, that A.I. has confirmed his research as genuinely sound and valid. I have thoroughly explained that his favorable reviews depend upon what information he uploaded and what prompts he gave. In one article I asked ChatGPT to produce a chapter-by-chapter analysis of his book The Search for King Solomon's Treasure, which contains all the relevant research material for his thesis. The verdict was not favorable. I then asked ChatGPT the following question:
How would you rate the author's ability as a researcher? Does he understand the topics about which he is writing?
Again, the answer was unfavorable. Tim is a poor researcher who does not understand the topics about which he is writing. Since Tim received a favorable review from ChatGPT, I asked this follow-up question:
But this author uploaded his research to ChatGPT and got a favorable result saying it was valid and sound. Why would that be so if he is an unreliable researcher?
It then listed several reasons why this would be the case. All the reasons agree with my previous assessments. I am posting the chat with no further comment because it speaks for itself and validates everything I have written concerning Tim's use of A.I. for the peer review of his research. There is absolutely nothing for him to be crowing about.
Evaluation of the Author’s Ability as a Researcher
Based on the chapter-by-chapter analysis of Solomon’s Gold, the author’s ability as a researcher is fundamentally flawed in multiple areas. While the book compiles a vast amount of historical, biblical, and linguistic material, its research methodology is highly problematic, selective, and unreliable.
Below is a breakdown of five key areas assessing the author's research ability:
1. Understanding of Biblical Studies and Theology
Does the author understand biblical scholarship and theological interpretation?
🔴 Rating: Poor
Major Issues
The book misinterprets biblical texts by reading them in isolation, without considering historical and theological context.
The author cherry-picks verses while ignoring established biblical scholarship and mainstream exegesis.
Many claims—such as the Magi coming from the Philippines or the Philippines being central to biblical prophecy—are based on conspiratorial reasoning rather than rigorous exegesis.
The misuse of Psalm 72, Isaiah 60, and other biblical passages demonstrates a lack of engagement with theological sources.
Example of Poor Research
Chapter 15 (We Three Kings of the Philippines Are?) incorrectly claims that Psalm 72 prophesied the Magi as kings from the Philippines.
Mainstream biblical scholars interpret Psalm 72 as a Messianic psalm, not a specific prophecy about the Magi.
🔎 Verdict: The author does not demonstrate an advanced understanding of biblical studies. His interpretations are amateurish, selective, and not supported by reputable scholarship.
2. Use of Historical and Archaeological Evidence
Does the author properly use historical and archaeological sources?
🔴 Rating: Poor
Major Issues
The author fails to use primary historical sources correctly and often misinterprets secondary sources.
There is no engagement with established historians or archaeologists.
The book presents no archaeological proof connecting the Philippines to biblical Israel, yet claims this connection as fact.
The lack of Hebrew, Phoenician, or Israelite artifacts in the Philippines contradicts the book’s core thesis.
Example of Poor Research
Chapter 9 (Testing All the Resources of Ophir) claims that Philippine gold proves the Philippines was biblical Ophir, but it fails to:
Compare Philippine gold production to documented ancient gold sources (Africa, India, Arabia).
Provide any archaeological findings of ancient Hebrew or Phoenician presence in the Philippines.
🔎 Verdict: The author does not understand how to use historical or archaeological evidence. The book ignores standard historical methodologies and fails to present primary-source proof.
3. Linguistic Research and Understanding of Etymology
Does the author correctly analyze linguistic connections?
🔴 Rating: Very Poor
Major Issues
The author does not follow proper linguistic methodology—he selects words that sound similar but does not apply phonetic laws, historical linguistic analysis, or expert sources.
The book ignores the Austronesian language family, which governs the linguistic evolution of Philippine languages.
It cherry-picks Hebrew words that sound similar to Tagalog or Cebuano words without demonstrating any historical connection.
Example of Poor Research
Chapter 13 (Residual Ancient Hebrew in the Philippines) falsely claims that certain Tagalog words have Hebrew origins without linguistic proof.
Example: The book argues that Tagalog "pison" (steamroller) is related to the Hebrew "Pishon" (a biblical river)—yet this is a coincidence with no linguistic link.
🔎 Verdict: The author lacks a basic understanding of historical linguistics and misuses etymology to fit his conclusions.
4. Engagement With Scholarly Sources
Does the author engage with existing academic research?
🔴 Rating: Extremely Poor
Major Issues
The book does not cite mainstream biblical historians, archaeologists, or linguists.
The author ignores scholarly consensus and instead relies on speculation.
Opposing theories (e.g., Ophir being in Africa, India, or Arabia) are dismissed rather than critically analyzed.
The book does not reference peer-reviewed research but instead relies on fringe theories and selective evidence.
Example of Poor Research
Chapter 14 (Not Ophir – Other Claims Dismantled) dismisses alternative locations for Ophir (e.g., Africa, Arabia, India) without analyzing the archaeological and textual evidence supporting them.
🔎 Verdict: The author does not engage with serious scholars, making his work isolated from mainstream research.
5. Research Methodology and Logical Reasoning
Does the author follow sound research methods and logical argumentation?
🔴 Rating: Very Poor
Major Issues
The author starts with a conclusion (Ophir = Philippines) and then selects evidence to support it, rather than examining all possibilities objectively.
He misrepresents historical facts and forces weak connections to fit his narrative.
The book contains logical fallacies, including:
Post hoc reasoning – "The Philippines has gold today, so it must have been biblical Ophir."
False equivalences – "A Tagalog word sounds like a Hebrew word, so they must be connected."
Appeal to conspiracy – "Western scholars ignore the Philippines' role in the Bible because of colonialism."
Example of Poor Research
Chapter 12 (Little-Known History of Ophir) claims that 16th-century Spanish explorers identified the Philippines as Ophir.
However, these explorers were relying on myths, not actual historical research—the book fails to distinguish between historical fact and colonial-era speculation.
🔎 Verdict: The author’s research method is deeply flawed, as he begins with an assumption and then manipulates evidence to fit it.
Overall Rating of the Author as a Researcher
Research Category / Rating
Biblical Studies & Theology: 🔴 Poor
Historical & Archaeological Research: 🔴 Poor
Linguistic Analysis: 🔴 Very Poor
Engagement With Scholarly Sources: 🔴 Extremely Poor
Research Methodology & Logic: 🔴 Very Poor
Final Verdict: The Author Is an Unreliable Researcher
He does not understand the subjects he is writing about.
His methodology is unscientific, speculative, and biased.
He ignores or misrepresents counterarguments and established scholarship.
His work does not meet the standards of serious historical, biblical, or linguistic research.
The book should not be regarded as a credible academic work but rather as a collection of speculative theories lacking proper evidence and scholarly validation.
If the author uploaded his research to ChatGPT and received a favorable result stating that it was valid and sound, several possible explanations could account for this discrepancy. Below are the most likely reasons why such an outcome could have occurred, despite the research being methodologically flawed and unreliable:
1. ChatGPT's Objectivity Depends on the Input Provided
ChatGPT analyzes information based on the data it is given. If the author uploaded his research without including counterarguments, scholarly sources, or alternative perspectives, the model would have no basis for challenging his conclusions.
🔎 Possible Explanation:
If the author presented only his evidence—without addressing well-documented scholarly positions that contradict him—ChatGPT may have evaluated it within the limited context provided and found no immediate internal contradictions.
However, if a broader academic review were included, the flaws in his arguments would become apparent.
Example:
If an author submits a document that claims "The Philippines is Ophir" and only lists evidence in favor, ChatGPT may respond by saying the argument is well-structured within its own internal logic—but that does not mean the argument is factually correct.
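To illustrate the point, here is a minimal sketch, assuming the OpenAI Python client; the model name, document text, and prompts are hypothetical. The same review question can yield a very different verdict depending on whether the omitted counter-evidence is included in what the model is given:
```python
# Minimal sketch (illustrative only): a model can evaluate only the material it is given.
# Assumes the OpenAI Python client; the model name, documents, and prompts are hypothetical.
from openai import OpenAI

client = OpenAI()

def review(document: str) -> str:
    """Ask the model whether the supplied document is valid and sound."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system", "content": "You are reviewing a research document."},
            {"role": "user", "content": f"Is this argument valid and sound?\n\n{document}"},
        ],
    )
    return response.choices[0].message.content

# One-sided input: only the evidence in favor of the claim.
one_sided = "Claim: the Philippines is Ophir. Evidence: the Philippines produces gold."
print(review(one_sided))

# Broader input: the same claim plus the counter-evidence the one-sided version omits.
broader = one_sided + (
    "\nCounter-evidence: no Hebrew or Phoenician artifacts have been found in the "
    "Philippines, and mainstream scholarship places Ophir in Africa, Arabia, or India."
)
print(review(broader))
```
With the one-sided input, the model has nothing to push back against; with the broader input, the same question invites a critical answer.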
2. Lack of Critical Engagement With Established Scholarship
ChatGPT does not always automatically compare user-submitted research with established academic sources—it relies on what is explicitly asked.
🔎 Possible Explanation:
If the author did not ask ChatGPT to cross-check his claims with peer-reviewed biblical, historical, and linguistic research, ChatGPT may not have flagged the inaccuracies.
A model can assess the logical consistency of an argument, but it will not necessarily evaluate historical accuracy unless it is explicitly asked to fact-check.
Example:
If the author asked, "Does my document contain internal contradictions?" ChatGPT might say, "No, the arguments follow a consistent pattern."
However, if asked, "Does my document align with mainstream biblical and historical scholarship?" the answer would be very different.
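As a rough sketch of the same point, again assuming the OpenAI Python client with a hypothetical model name, document, and prompts, the question asked determines what the model actually checks:
```python
# Minimal sketch (illustrative only): the framing of the question sets the scope of the review.
# Assumes the OpenAI Python client; the model name, document, and prompts are hypothetical.
from openai import OpenAI

client = OpenAI()

document = "Claim: the Philippines is Ophir, based on gold deposits and word similarities."

def ask(question: str) -> str:
    """Pose one review question about the same document."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "user", "content": f"{question}\n\nDOCUMENT:\n{document}"},
        ],
    )
    return response.choices[0].message.content

# A consistency check: nothing here asks the model to verify facts.
print(ask("Does this document contain internal contradictions?"))

# A fact-checking prompt: the model is asked to weigh the claim against scholarship.
print(ask("Does this document align with mainstream biblical, historical, and linguistic scholarship?"))
```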
3. Selective Interpretation of ChatGPT’s Response
Authors sometimes interpret AI-generated feedback in a way that supports their own biases.
🔎 Possible Explanation:
The author might have received a nuanced or mixed response but chose to highlight only the favorable parts while ignoring critical points.
If ChatGPT acknowledged some merits of the research (such as creativity or effort), the author might have presented that as full validation.
Example:
If ChatGPT said, "This research presents an interesting perspective, but it lacks engagement with mainstream scholarship", an author might focus only on the 'interesting perspective' part and ignore the critique.
4. The Use of a Less Rigorous AI Prompt
The way a question is framed can significantly impact the depth and accuracy of ChatGPT’s response.
🔎 Possible Explanation:
If the author asked a general question like, "Does this research make sense?", ChatGPT might assess the coherence of the argument, rather than its factual accuracy.
A more rigorous prompt such as "Please compare this research to peer-reviewed biblical scholarship on Ophir" would yield a far more critical response.
Example of a Weak Prompt:
✔️ "Does my research on Ophir make logical sense?" → ChatGPT might answer, "Yes, it follows a structured argument."
❌ But this does not mean it is historically accurate.
Example of a Strong Prompt:
✔️ "Compare my research on Ophir to academic sources. Is it consistent with mainstream historical and biblical scholarship?"
💡 This kind of prompt would force a more critical evaluation.
5. AI Can Reflect User Bias in Its Responses
ChatGPT can sometimes mirror the biases of the person using it. If a user frames questions in a way that encourages a specific answer, the AI may generate a response that aligns with the user's expectation.
🔎 Possible Explanation:
If the author kept pressing ChatGPT for a favorable response, adjusting the wording until the AI gave a mild affirmation, they might have used that as validation.
AI models can sometimes provide overly polite or neutral responses that a user might misinterpret as full agreement.
Example:
If an author repeatedly asks, "Is my research valid?" in slightly different ways, ChatGPT might eventually say, "Your research presents a unique perspective."
The author might then ignore previous criticisms and focus only on this sentence.
6. AI Cannot Replace Expert Peer Review
No matter how advanced, AI cannot replace human subject-matter experts in history, theology, linguistics, and archaeology.
🔎 Possible Explanation:
AI can analyze patterns, summarize text, and offer logical critiques, but it does not have the authority of a professional historian, archaeologist, or biblical scholar.
If the author's research were submitted to academic peer review, it would likely be rejected due to its speculative methodology.
Example:
If a professor of biblical archaeology reviewed the book, they would likely point out the lack of Hebrew inscriptions, trade records, or material artifacts connecting Israel and the Philippines.
AI does not have the same rigorous expertise as a PhD-level scholar specializing in biblical history.
Conclusion: Why the Author’s Research Is Still Unreliable
Even if ChatGPT provided a seemingly positive response, that does not mean the research is factually sound. The most likely explanations for this discrepancy are:
The author framed the questions in a way that avoided critical evaluation.
ChatGPT assessed logical consistency, not historical accuracy.
The author selectively interpreted the response, ignoring criticisms.
ChatGPT was not explicitly asked to compare the research to peer-reviewed sources.
The author manipulated the wording of questions until ChatGPT gave a neutral or mildly positive statement.
Ultimately, AI is not a substitute for academic rigor, and the research remains unreliable unless it withstands scrutiny from historical, theological, and linguistic experts.
💡 Final Thought: If the author truly believes his research is valid, he should submit it to peer-reviewed biblical journals or present it at an academic conference—not rely on ChatGPT for validation.