Models & Research Wednesday, 8 April 2026 | 1 min read

Pramana: Fine-Tuning Large Language Models for Epistemic Reasoning Through Navya-Nyaya

Apple researchers have developed Pramana, a system that fine-tunes large language models (LLMs) for epistemic reasoning grounded in Navya-Nyaya, a school of classical Indian logic. The approach improves LLMs' ability to reason systematically, reducing hallucinations and confident but unfounded claims. In their experiments, the researchers found that adding irrelevant context to mathematical problems degraded LLM performance by 65%, underscoring the fragility that Pramana aims to address.

These findings have implications for building more reliable AI systems: Pramana's results suggest that fine-tuning LLMs on epistemic reasoning tasks can improve both their accuracy and their robustness to distracting input.

Tags

#large language models #epistemic reasoning #Navya-Nyaya