Not just RMSE, MAE, or log loss, but full work evaluation


Simple or in-depth model scoring

You get to choose! Either you rank participants by how well their submitted predictions match the outcomes, using our preloaded evaluation metrics, or you start from the models the participants deliver, optionally apply custom metrics, and extend your analysis to model compute efficiency or interpretability.
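As an illustration (not HFactory's actual API), ranking by preloaded metrics alongside a custom one could look like this; the `within_tolerance` metric and all names here are hypothetical:

```python
import math

# Score a prediction submission against ground truth with standard
# metrics (RMSE, MAE) plus a hypothetical custom metric.

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def within_tolerance(y_true, y_pred, tol=0.5):
    """Hypothetical custom metric: share of predictions within `tol` of truth."""
    return sum(abs(t - p) <= tol for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 1.5, 2.0, 4.0]
submission = [2.5, 1.0, 2.5, 4.5]

scores = {
    "rmse": rmse(y_true, submission),            # 0.5
    "mae": mae(y_true, submission),              # 0.5
    "within_0.5": within_tolerance(y_true, submission),  # 1.0
}
```

Participants would then be ranked on whichever of these metrics the organiser selects.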


Project and code quality assessment

HFactory includes fully automated scoring of the Git project's organisation, the code's compliance, and the quality of the documentation. Beyond grading, the resulting project and code quality report is a great tool for promoting coding and software-engineering best practices among data scientists.


Configurable scoring dimensions

HFactory lets you add evaluation dimensions beyond the objective metrics from the model and code analysis, such as the quality of the final pitch in a data challenge. And of course, you can fully adjust the weights of the various scoring parameters so that they fit your pedagogical objectives.
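Combining weighted dimensions into an overall grade could be sketched as follows; the dimension names and weights are hypothetical, not HFactory's configuration:

```python
# Combine per-dimension scores (each on a 0-100 scale) into one grade
# using organiser-chosen weights.

def weighted_score(dimension_scores, weights):
    """Weighted average of the scores over the given weights."""
    total_weight = sum(weights.values())
    return sum(dimension_scores[d] * w for d, w in weights.items()) / total_weight

scores = {"model_accuracy": 82.0, "code_quality": 70.0, "final_pitch": 90.0}
weights = {"model_accuracy": 0.5, "code_quality": 0.3, "final_pitch": 0.2}

weighted_score(scores, weights)  # 0.5*82 + 0.3*70 + 0.2*90 = 80.0
```

Adjusting the weights dictionary is all it takes to shift emphasis, say, from predictive accuracy to code quality.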