Mark,
Read the entry about the Gunnery test chart. You will find that the AI did provide a source.
"These figures come from U.S. Army Ordnance Dept. and British AFV Gunnery School trials between 1942–1943."
Posted 10 August 2025 - 05:32 PM
Happy to do my part, Bob! LOL!
The new stats make the M-3 Lee/Grant just a bit more formidable, as I think they were when introduced to North Africa in mid-1942. Not a war winner by any means, but now, at least at OM1, better able to deal with those pesky Mk IIIs and IVs.
I do appreciate all the time and effort everyone has taken to check things out on this. This has definitely been an interesting conversation.
Posted 31 August 2025 - 12:05 PM
Read the entry about the Gunnery test chart. You will find that the AI did provide a source.
"These figures come from U.S. Army Ordnance Dept. and British AFV Gunnery School trials between 1942–1943."
I'm afraid this does not qualify as a genAI bot giving a source.
I cannot go and find this source based on that statement. There are thousands of US Army Ordnance and British trials.
Note that I did not just say "US Army Field Manuals" when stating a source for the stabilizer. I said WHICH field manual. That gives a third party a fighting chance of verifying my statement. That's what I am looking for ... a fighting chance of verifying the chat bot's statement. Because until it gives me a source ... a specific source that I can go and verify (whether or not I actually chase it down), I retain a degree of skepticism. And I urge my colleagues to do the same.

Don't let the chat bots fool you. They are going to give you an answer, whether they are right or wrong. So retain your skepticism. Make the bot tell you which source, specifically, it got its information from. So far, the genAI chat bots I have worked with will not explicitly lie. But they will mis-summarize or even hallucinate. They'll make stuff up.
I had a case of that just this past week with ChatGPT. It told me a company was a member of a trade association. I asked for a source for that information. It said the company was listed as a member. I asked where the list could be found. It then came back and said there were no verifiable sources showing the company was listed as a member. Aha! It had made it up.
Not that you care which company is part of which trade association. But it is a perfect example of how the chat bots can get things wrong, and how you sometimes have to dig to ensure you have a good answer.
And ... I want to know the tests because I like to collect reports from such things. So even if the AI was right, I still want to know where I can get my digital hands on the test firing results.
-Mark
(aka: Mk 1)