Can we trust large language models to summarize food policy research papers and generate research briefs?
Generative large language models (LLMs) are widely accessible and capable of simulating policy recommendations, but assessing the accuracy of their output remains a challenge. Users, including policy analysts and decision-makers, bear the responsibility of evaluating what these models produce. A signif...
| Main authors: | |
|---|---|
| Format: | Preprint |
| Language: | English |
| Published: | International Food Policy Research Institute, 2023 |
| Subjects: | |
| Online access: | https://hdl.handle.net/10568/137600 |
Similar items:
- Can we trust large language models to summarize food policy research papers and generate research briefs?
- Trust the messenger? The role of AI transparency in policy research communication
- Can we trust AI to generate agricultural extension advisories?
- Man vs. machine: Experimental evidence on the quality and perceptions of AI-generated research content
- Large language models and agricultural extension services
- AI in qualitative research: Using large language models to code survey responses in native languages
- Longa: An automated speech recognition tool for Bantu languages