Pramana: Fine-Tuning Large Language Models for Epistemic Reasoning Through Navya-Nyaya
Apple researchers have developed Pramana, a system that fine-tunes large language models (LLMs) for epistemic reasoning using Navya-Nyaya, a school of classical Indian logic. The approach trains models to reason systematically, reducing hallucinations and confidently stated but unfounded claims. The researchers also found that adding irrelevant context to mathematical problems degraded LLM performance by 65%.
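The distractor experiment described above can be sketched in a few lines. This is a hypothetical illustration (the function name and example problem are my own, not from the paper): an irrelevant sentence is appended to a math word problem, leaving the correct answer unchanged, so any drop in model accuracy is attributable to the distracting context alone.

```python
def inject_distractor(problem: str, distractor: str) -> str:
    """Append an irrelevant sentence to a math word problem.

    The numeric answer is unaffected; only surface context changes.
    """
    return f"{problem} {distractor}"


problem = (
    "Liam picked 44 kiwis on Friday and 58 kiwis on Saturday. "
    "How many kiwis did he pick in total?"
)
# Irrelevant detail: it mentions kiwis but does not change the arithmetic.
distractor = "Five of the kiwis were slightly smaller than average."

perturbed = inject_distractor(problem, distractor)
print(perturbed)
```

In evaluations of this kind, the original and perturbed problems are both posed to the model, and the accuracy gap between the two conditions measures how robust its reasoning is to irrelevant context.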
This fragility has implications for building more reliable AI systems. Pramana's results suggest that fine-tuning LLMs on epistemic reasoning tasks can improve both their accuracy and their robustness to misleading context.