LLMs are totally unreliable for research. They are just probable token generators.
Especially if you're looking for new information that nobody has written about before, you're just going to get convincing hallucinations. It's like talking to a slightly drunk professor at a loud bar who can never admit they don't know something.
Example: ask an LLM "what open source software developer died in the September 11th attacks?"
It will give you names, and when you try to verify them, you'll find those people didn't actually die in the attacks. It's just generating probable tokens.