| Summary: | This study presents a human-in-the-loop framework to enhance the accuracy, inclusivity, and contextual relevance of GPT-5-based agricultural advisories in Kenya. Using over 2,800 real farmer queries from the iShamba SMS platform, the researchers applied prompt optimization, expert review, and Reinforcement Learning from Human Feedback (RLHF) to refine AI responses. The refined model achieved a 27% increase in satisfactory answers and a technical accuracy score of 1.95/2, outperforming baseline systems. Six key bias types, including gender, social, regional, commercial, and linguistic bias, were identified and mitigated through localized data and bilingual support. The paper demonstrates how RLHF and participatory design can align generative AI with smallholder farmers' needs, producing advice that is scientifically sound, equitable, and culturally grounded. The findings underscore the potential of inclusive AI frameworks to democratize climate-smart agricultural knowledge while safeguarding against bias in digital extension services. |