# LLM Evaluation Report
| Model | Date | Total Response Time (s) | Tests Passed (of 164) | Mean CodeBLEU (0-1) | Mean Usefulness Score (0-4) | Mean Functional Correctness Score (0-4) |
| --- | --- | --- | --- | --- | --- | --- |
| gpt-4o-mini | 2024-07-24 | 204.2 | 118 | 0.336 | 3.42 | 3.49 |
| gemini-1.5-pro | 2024-07-24 | 599.0 | 102 | 0.335 | 3.31 | 3.07 |
| claude-3-5-sonnet-20240620 | 2024-07-24 | 354.2 | 112 | 0.303 | 3.54 | 3.32 |
| gpt-4o | 2024-07-24 | 306.4 | 123 | 0.319 | 3.54 | 3.54 |
| gpt-4-turbo | 2024-07-24 | 499.4 | 108 | 0.329 | 3.51 | 3.48 |
| gpt-4-turbo-preview | 2024-07-24 | 427.1 | 110 | 0.317 | 3.51 | 3.40 |
| claude-3-opus-20240229 | 2024-07-24 | 1014.8 | 106 | 0.311 | 3.50 | 3.22 |
**Total Response Time (s):** The total time the model took to generate all outputs.
**Tests Passed:** The number of unit tests the model's outputs passed during evaluation, out of 164 in total. A minimal sketch of this computation (and of the response-time measurement) follows.
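The report doesn't specify the harness, so the following is only a sketch assuming a HumanEval-style setup (164 tasks, each with a `prompt`, a `test` block defining a `check()` function, and an `entry_point` name); `generate_completion` is a hypothetical helper wrapping the model's API:

```python
import time

def evaluate_model(model, tasks, generate_completion):
    """Count passed unit tests and total generation time for one model.

    `tasks` is assumed to be a list of dicts with HumanEval-style
    `prompt`, `test`, and `entry_point` fields.
    """
    start = time.perf_counter()
    completions = [generate_completion(model, t["prompt"]) for t in tasks]
    total_response_time = time.perf_counter() - start  # "Total Response Time (s)"

    passed = 0
    for task, completion in zip(tasks, completions):
        try:
            scope = {}
            # WARNING: executing model output is unsafe outside a sandbox.
            exec(task["prompt"] + completion + "\n" + task["test"], scope)
            scope["check"](scope[task["entry_point"]])  # run the unit tests
            passed += 1  # counts toward "Tests Passed"
        except Exception:
            pass  # any assertion failure or crash counts as a failed test

    return passed, total_response_time
```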
**Mean CodeBLEU:** The average CodeBLEU score, a code-generation metric that combines n-gram overlap with syntactic (AST) and semantic (data-flow) matching against a reference solution; see the sketch below.
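One way to reproduce this column is with the open-source `codebleu` package (an assumption; the report doesn't name the implementation actually used):

```python
from statistics import mean

from codebleu import calc_codebleu  # pip install codebleu

def mean_codebleu(references, predictions):
    """Average CodeBLEU over all tasks.

    `references` and `predictions` are parallel lists of code strings:
    the canonical solutions and the model's outputs.
    """
    scores = [
        calc_codebleu([ref], [pred], lang="python")["codebleu"]
        for ref, pred in zip(references, predictions)
    ]
    return mean(scores)
```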
**Mean Usefulness Score:** The average usefulness of the model's outputs, as rated by an LLM judge on the following 0-4 rubric (a scoring sketch follows the rubric):
- 0: Snippet is not at all helpful; it is irrelevant to the problem.
- 1: Snippet is slightly helpful; it contains information relevant to the problem, but it would be easier to write the solution from scratch.
- 2: Snippet is somewhat helpful; it requires significant changes (relative to the size of the snippet), but is still useful.
- 3: Snippet is helpful, but needs slight changes to solve the problem.
- 4: Snippet is very helpful; it solves the problem.
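The report doesn't say which model acts as judge or what prompt it sees; the sketch below assumes the OpenAI chat API, a gpt-4o judge, and a prompt that paraphrases the rubric above. All of those choices are assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Judge prompt paraphrasing the 0-4 usefulness rubric (assumed wording).
RUBRIC = """Rate how useful the code snippet is for solving the problem, 0-4:
0: irrelevant to the problem.
1: slightly helpful; it would be easier to write the solution from scratch.
2: somewhat helpful; needs significant changes but is still useful.
3: helpful, but needs slight changes to solve the problem.
4: very helpful; it solves the problem.
Answer with a single digit."""

def usefulness_score(problem: str, snippet: str, judge_model: str = "gpt-4o") -> int:
    response = client.chat.completions.create(
        model=judge_model,
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Problem:\n{problem}\n\nSnippet:\n{snippet}"},
        ],
        temperature=0,  # deterministic judging
    )
    return int(response.choices[0].message.content.strip())
```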
**Mean Functional Correctness Score:** The average functional correctness of the model's outputs (how well they meet the functional requirements), also rated by an LLM on a 0-4 scale anchored as follows:
- 0 (failing all possible tests): The code snippet is totally incorrect and meaningless.
- 4 (passing all possible tests): The code snippet is totally correct and can handle all cases.
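Each table row is then just these per-task results averaged per model. A minimal aggregation sketch, with hypothetical field names:

```python
from statistics import mean

def summarize(results):
    """Collapse per-task results into one table row.

    `results` is a list of per-task dicts with assumed keys:
    `passed` (bool), `codebleu` (0-1), `usefulness` and
    `correctness` (both 0-4).
    """
    return {
        "tests_passed": sum(r["passed"] for r in results),  # out of 164
        "mean_codebleu": mean(r["codebleu"] for r in results),
        "mean_usefulness": mean(r["usefulness"] for r in results),
        "mean_correctness": mean(r["correctness"] for r in results),
    }
```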