LLM Evaluation Report
Last updated: 2024-11-20
The following metrics are reported for each model; full results are in the table at the end of this report.
Total Response Time (s): The total time the model took to generate all of its outputs.
Tests Passed: The number of unit tests the model's outputs passed during evaluation, out of 164.
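As an illustration of how such a pass/fail check can be run, here is a minimal sketch: each generated solution is written to a temporary file together with its unit tests and executed in a subprocess. The helper name and file handling are illustrative assumptions, not the report's actual harness.

```python
import os
import subprocess
import tempfile

def passes_tests(solution_code: str, test_code: str, timeout: int = 10) -> bool:
    """Return True if the generated solution passes its unit tests.

    Hypothetical helper: writes solution + tests to a temp file and
    runs it in a subprocess; a non-zero exit code means a test failed.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(
            ["python", path], capture_output=True, timeout=timeout
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        # Treat hangs and infinite loops as failures.
        return False
    finally:
        os.unlink(path)
```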
Mean CodeBLEU: Average CodeBLEU score, a metric that evaluates generated code on both syntactic and semantic similarity to a reference solution.
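CodeBLEU combines n-gram, weighted n-gram, AST, and data-flow matching. One way to compute it is the open-source `codebleu` package; the call below follows that package's documented interface, but the exact signature may vary by version and should be treated as an assumption.

```python
from codebleu import calc_codebleu  # pip install codebleu

reference = "def add(a, b):\n    return a + b\n"
prediction = "def add(x, y):\n    return x + y\n"

result = calc_codebleu(
    [reference], [prediction], lang="python",
    # weights over: n-gram, weighted n-gram, AST match, data-flow match
    weights=(0.25, 0.25, 0.25, 0.25),
)
print(result["codebleu"])  # a score in [0, 1]
```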
Mean Usefulness Score: Average usefulness rating of the model's outputs, as judged by an LLM on the following 0-4 scale (a sketch of such a judge follows the scale):
- 0: The snippet is not at all helpful; it is irrelevant to the problem.
- 1: The snippet is slightly helpful; it contains information relevant to the problem, but it is easier to write the solution from scratch.
- 2: The snippet is somewhat helpful; it requires significant changes (relative to its size) but is still useful.
- 3: The snippet is helpful but needs slight changes to solve the problem.
- 4: The snippet is very helpful; it solves the problem.
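A hypothetical sketch of such a judge, assuming an OpenAI-style chat completions API; the judge model, prompt wording, and response parsing are illustrative assumptions rather than the report's actual setup.

```python
from openai import OpenAI

RUBRIC = """Rate the snippet's usefulness for the problem from 0 to 4:
0: not at all helpful, irrelevant to the problem.
1: slightly helpful; easier to write the solution from scratch.
2: somewhat helpful; needs significant changes but is still useful.
3: helpful; needs only slight changes to solve the problem.
4: very helpful; it solves the problem.
Reply with the digit only."""

def usefulness_score(problem: str, snippet: str, model: str = "gpt-4o") -> int:
    """Ask an LLM judge to rate a snippet against the 0-4 rubric above."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Problem:\n{problem}\n\nSnippet:\n{snippet}"},
        ],
    )
    return int(response.choices[0].message.content.strip())
```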
Mean Functional Correctness Score: Average rating of how well the model's outputs meet the functional requirements, as judged by an LLM (with a prompt analogous to the usefulness judge above) on a 0-4 scale anchored at:
- 0 (would fail all possible tests): The code snippet is totally incorrect and meaningless.
- 4 (would pass all possible tests): The code snippet is totally correct and can handle all cases.
| Model | Date | Total Response Time (s) | Tests Passed | Mean CodeBLEU (0-1) | Mean Usefulness Score (0-4) | Mean Functional Correctness Score (0-4) |
|---|---|---|---|---|---|---|
| o1-preview | 2024-11-20 | 2006 | 131 | 0.316933 | 3.60366 | 3.64024 |
| o1-mini | 2024-11-20 | 680.368 | 133 | 0.342896 | 3.68293 | 3.7561 |
| gpt-4o | 2024-11-20 | 354.689 | 126 | 0.322102 | 3.7378 | 3.75 |
| gpt-4o-mini | 2024-11-20 | 201.423 | 112 | 0.33042 | 3.67073 | 3.72561 |
| claude-3-5-sonnet-20240620 | 2024-11-20 | 318.568 | 111 | 0.306173 | 3.66463 | 3.64024 |
| claude-3-5-sonnet-20241022 | 2024-11-20 | 327.833 | 109 | 0.327235 | 3.65854 | 3.64634 |
| gemini-1.5-pro | 2024-11-20 | 516.921 | 92 | 0.333394 | 3.5061 | 3.5122 |
| gemini-1.5-flash | 2024-11-20 | 759.693 | 2 | 0.270065 | 0.670732 | 0.829268 |
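For reference, each summary row above can be aggregated from per-problem records along these lines; the record fields and values here are illustrative assumptions, not actual evaluation data.

```python
from statistics import mean

# One record per problem (164 in total); values shown are made up.
records = [
    {"passed": True, "codebleu": 0.41, "usefulness": 4, "correctness": 4},
    {"passed": False, "codebleu": 0.22, "usefulness": 2, "correctness": 1},
    # ...
]

summary = {
    "tests_passed": sum(r["passed"] for r in records),
    "mean_codebleu": mean(r["codebleu"] for r in records),
    "mean_usefulness": mean(r["usefulness"] for r in records),
    "mean_correctness": mean(r["correctness"] for r in records),
}
print(summary)
```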