FAQ

11/05/2025

If you have any questions, suggestions or criticisms about the HarpIA platform, please contact the KEML team using this form.

  1. Can I freely extend the HarpIA platform?
    The platform can be freely extended according to the distribution rules established by the Apache License, Version 2.0. If you want your extension to be included in the HarpIA platform distribution, please contact the development team.
  2. How do I integrate the HarpIA platform with Hugging Face to use datasets?
    In the current distribution of the HarpIA platform modules, it is not possible to use external datasets. Note that the tools provided by the HarpIA platform focus on the evaluation of large language models (LLMs); for this reason, the input formats expected by each tool are based on results typically generated by LLMs.
  3. How do I optimize an LLM using the HarpIA platform?
    The HarpIA platform focuses on evaluating the performance of large language models (LLMs). Although some modules of the platform interact with LLMs at runtime, they do not provide any functionality specifically designed to support or facilitate the optimization of LLMs. In other words, the optimization of LLMs is not part of the functional requirements that the HarpIA platform aims to meet.
  4. Is the HarpIA platform available as an online service provided by the KEML@C4AI group?
    The availability of the HarpIA platform as a service (SaaS) is not included in the current release plan. The platform’s codebase is fully and publicly available; interested parties must download the code and install it on their own infrastructure.
  5. If I have problems installing some module of the HarpIA platform, how can I get support?
    The group that develops the HarpIA platform is a research group. Unfortunately, we do not have the resources to provide personalized support to users. However, we would be happy to receive any criticism or suggestions that may guide us in improving the platform or its support documentation; if you have any, please contact us using this form.
  6. Does the HarpIA platform offer benchmarks for LLM evaluation?
    No. The HarpIA platform aims to assist its users in carrying out LLM evaluations in a systematic and responsible manner. However, the results of an evaluation using this platform are self-documented and can be shared by the researcher with other users, who can use the data as benchmarks.
  7. Is it possible to test the HarpIA platform outside of the Moodle environment?
    The HarpIA platform is composed of several modules and only the HarpIA Survey module has the Moodle server as a dependency. The HarpIA Lab module, for example, does not depend on a Moodle server and can be used locally via either the command line or the web interface. To do this, you need to clone the respective GitHub repository and prepare the environment on your infrastructure.
  8. I would like to contribute as an evaluator of LLMs. Can I apply for this?
    The recruitment of individuals to participate in a human-centric LLM evaluation is conducted outside the scope of the HarpIA Survey. In other words, each researcher who adopts the HarpIA Survey to conduct an LLM evaluation recruits potential participants using their own tools and procedures. The HarpIA platform does not offer functionalities specifically designed to coordinate this recruitment effort.
  9. Can I run experiments with large language models using the HarpIA platform?
    The HarpIA platform is positioned as a tool to support researchers who need to evaluate LLMs, either automatically or with the involvement of human evaluators. You can integrate the HarpIA platform into the workflow of an experiment. For example, suppose you want to evaluate a set of LLMs fine-tuned to perform a specific task. First, each of the language models needs to be exposed to the test dataset specified by the researcher. The responses generated by the LLMs must then be organized in a JSON file in the format expected by the HarpIA platform (see the illustrative sketch below). From this point on, the HarpIA Lab module can evaluate the performance of each model according to the metrics selected by the researcher. The evaluation results produced by the HarpIA Lab module contribute to the experiment by providing an organization that fosters the reproducibility and auditability of the evaluations performed.
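
The following sketch (in Python) illustrates the preprocessing step mentioned in the answer to question 9: collecting the responses of the models under evaluation into a JSON file that can then be handed to the HarpIA Lab module. It is only an illustration of the general idea; the field names, model identifiers, and the generate_response helper are hypothetical placeholders, and the actual input schema expected by the HarpIA platform is the one defined in the HarpIA Lab documentation.

```python
import json

# Purely illustrative sketch: collect the responses of each fine-tuned model on
# the test dataset and save them to a JSON file. All names below ("model",
# "items", "prompt", "response", generate_response, the model identifiers) are
# hypothetical placeholders, not the actual schema expected by the HarpIA
# platform; consult the HarpIA Lab documentation for the real input format.

def generate_response(model_name: str, prompt: str) -> str:
    # Placeholder for whatever inference mechanism you use (local model, API, ...).
    return f"dummy response from {model_name}"

test_prompts = [
    "Summarize the following paragraph: ...",
    "Translate the following sentence into French: ...",
]

models_under_evaluation = ["model-a-finetuned", "model-b-finetuned"]

results = []
for model_name in models_under_evaluation:
    results.append({
        "model": model_name,
        "items": [
            {"prompt": prompt, "response": generate_response(model_name, prompt)}
            for prompt in test_prompts
        ],
    })

# Write the collected responses to disk; a file like this (or a conversion of
# it into the required format) is what the HarpIA Lab module would consume
# when computing the metrics selected by the researcher.
with open("llm_responses.json", "w", encoding="utf-8") as f:
    json.dump(results, f, ensure_ascii=False, indent=2)
```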