# LLM Evaluation Report
| Model | Date | Total Response Time (s) | Tests Passed (of 164) | Mean CodeBLEU (0-1) | Mean Usefulness Score (0-4) | Mean Functional Correctness Score (0-4) |
|---|---|---|---|---|---|---|
| gpt-4o-mini | 2024-08-21 | 173.26 | 118 | 0.345 | 3.445 | 3.494 |
| gemini-1.5-pro | 2024-08-21 | 552.05 | 107 | 0.326 | 3.341 | 3.238 |
| claude-3-5-sonnet-20240620 | 2024-08-21 | 357.06 | 114 | 0.304 | 3.518 | 3.287 |
| gpt-4o | 2024-08-21 | 221.82 | 123 | 0.324 | 3.470 | 3.530 |
| claude-3-opus-20240229 | 2024-08-21 | 892.91 | 112 | 0.316 | 3.518 | 3.390 |
**Total Response Time (s):** the total wall-clock time the model took to generate all of its outputs.

**Tests Passed:** the number of unit tests the model's outputs passed during evaluation, out of 164 in total. A counting sketch follows.
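Pass counting can be sketched by executing each generated snippet together with its unit tests in an isolated subprocess; the file layout, timeout, and `passes_tests` helper below are illustrative assumptions, not the exact harness behind the table above.

```python
import os
import subprocess
import tempfile

def passes_tests(snippet: str, test_code: str, timeout: float = 10.0) -> bool:
    """Return True if the snippet plus its unit tests exits cleanly."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "candidate.py")
        with open(path, "w") as f:
            # Concatenate the candidate solution and its tests into one file.
            f.write(snippet + "\n\n" + test_code)
        try:
            result = subprocess.run(
                ["python", path], capture_output=True, timeout=timeout
            )
            return result.returncode == 0
        except subprocess.TimeoutExpired:
            # A hung snippet counts as a failure.
            return False

# Tests Passed is then the number of snippet/test pairs that succeed:
# passed = sum(passes_tests(s, t) for s, t in zip(snippets, tests))
```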
**Mean CodeBLEU (0-1):** the average CodeBLEU score, a code-generation quality metric that combines n-gram overlap with syntactic (AST) and semantic (data-flow) similarity to the reference solution.
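CodeBLEU can be computed with the open-source `codebleu` package; a minimal sketch, assuming that package and its default equal component weights (the report does not state which implementation was used):

```python
from codebleu import calc_codebleu  # pip install codebleu

references = ["def add(a, b):\n    return a + b"]
predictions = ["def add(x, y):\n    return x + y"]

result = calc_codebleu(
    references,
    predictions,
    lang="python",
    # Equal weights for n-gram, weighted n-gram, AST, and data-flow match.
    weights=(0.25, 0.25, 0.25, 0.25),
)
print(result["codebleu"])  # single score in [0, 1]
```

Mean CodeBLEU is then the average of this score over all evaluation problems.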
**Mean Usefulness Score (0-4):** the average usefulness of the model's output, as rated by an LLM judge on the following scale (a judging sketch follows the scale):

- 0: the snippet is not helpful at all; it is irrelevant to the problem.
- 1: the snippet is slightly helpful; it contains information relevant to the problem, but it would be easier to write the solution from scratch.
- 2: the snippet is somewhat helpful; it requires significant changes (relative to its size) but is still useful.
- 3: the snippet is helpful but needs slight changes to solve the problem.
- 4: the snippet is very helpful; it solves the problem.
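A judge of this kind can be sketched as a single prompt that asks a strong model to return one integer; the judge model, prompt wording, and `judge_score` helper are assumptions for illustration, since the report does not specify them.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_score(rubric: str, problem: str, snippet: str, model: str = "gpt-4o") -> int:
    """Ask an LLM judge to rate a snippet 0-4 against a rubric."""
    prompt = (
        "Rate the following code snippet on a 0-4 scale.\n\n"
        f"Rubric:\n{rubric}\n\n"
        f"Problem:\n{problem}\n\n"
        f"Snippet:\n{snippet}\n\n"
        "Reply with a single integer between 0 and 4."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic scoring
    )
    return int(resp.choices[0].message.content.strip())
```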
**Mean Functional Correctness Score (0-4):** the average functional correctness of the model's outputs, i.e. how well they meet the functional requirements, also rated by an LLM judge. The scale is anchored at its endpoints (a usage sketch follows):

- 0 (would fail all possible tests): the code snippet is totally incorrect and meaningless.
- 4 (would pass all possible tests): the code snippet is totally correct and handles all cases.
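The same `judge_score` sketch above can be reused with the correctness rubric; the `problems` and `snippets` collections here are hypothetical placeholders.

```python
# Hypothetical rubric text condensed from the scale anchors above.
correctness_rubric = (
    "0 (would fail all possible tests): totally incorrect and meaningless; "
    "4 (would pass all possible tests): totally correct, handles all cases."
)
scores = [judge_score(correctness_rubric, p, s) for p, s in zip(problems, snippets)]
mean_correctness = sum(scores) / len(scores)
```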