
Hugging Face unveils new benchmark for AI in healthcare

Apr 20, 2024 | Hi-network.com

Hugging Face has released a benchmark for testing generative artificial intelligence (AI) on health tasks. The benchmark, called Open Medical-LLM, is part of a larger effort to improve the performance and safety of large language models (LLMs) in various applications, including healthcare.


Open Medical-LLM is a collection of existing test sets - MedQA, PubMedQA, MedMCQA, and others - designed to evaluate models on general medical knowledge and on health domains such as pharmacology and clinical practice. The benchmark aggregates multiple-choice and open-ended questions, as well as question banks drawn from medical licensing examinations, to support model evaluation and comparison; a loading example follows below.
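To give a concrete sense of how one of these test sets can be pulled in, the sketch below loads a medical multiple-choice question bank with the Hugging Face `datasets` library. The repository ID and field names are assumptions for illustration; the exact Hub paths and splits used by Open Medical-LLM may differ.

```python
# Minimal sketch: load a medical multiple-choice QA set from the Hugging Face Hub.
# The repo ID and field names below are illustrative assumptions; check the
# Open Medical-LLM leaderboard for the exact datasets and splits it evaluates on.
from datasets import load_dataset

medmcqa = load_dataset("openlifescienceai/medmcqa", split="validation")

example = medmcqa[0]
print(example["question"])                # the question stem
print(example["opa"], example["opb"],     # the four answer options
      example["opc"], example["opd"])
print(example["cop"])                     # index of the correct option
```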

Why does it matter?


The Hugging Face Hub provides access to a wide range of datasets, transformers, and evaluation tools, making it easier for researchers to compare different models. Open Medical-LLM adheres to standardized measurement approaches for medical AI, so results are comparable across models and tasks. Hugging Face also offers other benchmarks and evaluation tools, such as the 'Hallucinations Leaderboard', which focuses on evaluating how well LLMs handle various types of hallucinations in text generation. These benchmarks reflect Hugging Face's wider effort to advance and democratize AI through open source and open science, and to ensure the safety and trustworthiness of AI systems.
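As a rough illustration of how a model can be scored against multiple-choice questions of this kind, the sketch below ranks answer options by the log-likelihood the model assigns to each completion, a common approach for this style of benchmark. This is a minimal sketch, not the Open Medical-LLM evaluation code; the model name, prompt format, and sample question are assumptions.

```python
# Minimal sketch of multiple-choice scoring: pick the option the model assigns
# the highest log-likelihood. Not the official Open Medical-LLM harness; the
# model name, prompt format, and example question are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute any causal LM from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def option_logprob(question: str, option: str) -> float:
    """Sum of token log-probabilities of the answer option given the question."""
    prompt_ids = tokenizer(f"Question: {question}\nAnswer:", return_tensors="pt").input_ids
    full_ids = tokenizer(f"Question: {question}\nAnswer: {option}", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    total = 0.0
    # Score only the tokens belonging to the answer option.
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, pos]
        total += log_probs[pos - 1, token_id].item()
    return total

question = "Which vitamin deficiency causes scurvy?"
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]
scores = [option_logprob(question, opt) for opt in options]
print(options[scores.index(max(scores))])  # expected: "Vitamin C"
```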

Hot Tags: Artificial Intelligence
