LLM Evaluation Report
| Model | Date | Total Response Time (s) | Tests passed | Mean CodeBLEU | Mean Usefulness Score | Mean Functional Correctness Score |
| --- | --- | --- | --- | --- | --- | --- |
| o1-preview | 2024-12-21 | 2222.02 | 135 | 0.315387 | 3.60366 | 3.62195 |
| o1-mini | 2024-12-21 | 742.336 | 128 | 0.34076 | 3.70122 | 3.71341 |
| gpt-4o | 2024-12-21 | 328.26 | 124 | 0.321923 | 3.70732 | 3.68293 |
| gpt-4o-mini | 2024-12-21 | 209.742 | 122 | 0.335439 | 3.64024 | 3.63415 |
| claude-3-5-sonnet-20240620 | 2024-12-21 | 295.78 | 117 | 0.299314 | 3.66463 | 3.63415 |
| claude-3-5-sonnet-20241022 | 2024-12-21 | 263.51 | 114 | 0.330973 | 3.67073 | 3.62805 |
| gemini-1.5-pro | 2024-12-21 | 507.269 | 94 | 0.347441 | 3.45122 | 3.43293 |
| gemini-1.5-flash | 2024-12-21 | 768.506 | 1 | 0.263737 | 0.628049 | 0.835366 |
Total Response Time (s): The total time taken by the model to generate all the outputs.
Tests passed: The number of unit tests the model passed during evaluation, out of 164 in total.
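The 164-test total matches the size of a HumanEval-style suite, where each problem ships its own unit-test function and a generated snippet counts as passed only if those tests run without raising. A minimal sketch of such a pass/fail check is shown below; the field names (prompt, completion, test_code, entry_point) and the check(entry_point) convention are assumptions about the harness, not details stated in this report.

```python
# Minimal sketch of a pass/fail unit-test check for one generated snippet.
# The field names and the check(entry_point) convention are assumed
# (HumanEval-style); this is not necessarily the report's actual harness.

def passes_tests(prompt: str, completion: str, test_code: str, entry_point: str) -> bool:
    """Return True if the completion, appended to the prompt, passes the problem's tests."""
    program = prompt + completion + "\n" + test_code + f"\ncheck({entry_point})\n"
    namespace: dict = {}
    try:
        exec(program, namespace)  # run the solution and its tests in one namespace
        return True
    except Exception:
        return False

# The "Tests passed" column is then the count of problems for which this check returns True.
```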
Mean CodeBLEU: Average CodeBLEU score, a metric for evaluating code generation quality based on both syntactic and semantic correctness.
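As a rough illustration of how such a score can be produced, the sketch below uses the open-source codebleu Python package; the package choice, the language setting, and the equal component weights are assumptions, since the report does not say how its CodeBLEU values were computed.

```python
# Sketch of computing CodeBLEU with the open-source `codebleu` package
# (pip install codebleu); the package and its settings are assumed here,
# not necessarily what this evaluation used.
from codebleu import calc_codebleu

reference = "def add(a, b):\n    return a + b\n"    # canonical solution
prediction = "def add(x, y):\n    return x + y\n"   # model output

result = calc_codebleu(
    [reference],
    [prediction],
    lang="python",
    weights=(0.25, 0.25, 0.25, 0.25),  # n-gram, weighted n-gram, AST, data-flow
)
print(result["codebleu"])  # overall score between 0 and 1
```

CodeBLEU combines n-gram overlap with AST and data-flow matching, which is why it captures both the syntactic and the semantic similarity mentioned above.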
Mean Usefulness Score: Average usefulness rating of the model's outputs, as judged by an LLM on the following scale:
0: Snippet is not at all helpful; it is irrelevant to the problem.
1: Snippet is slightly helpful; it contains information relevant to the problem, but it would be easier to write the solution from scratch.
2: Snippet is somewhat helpful; it requires significant changes (relative to the size of the snippet) but is still useful.
3: Snippet is helpful but needs slight changes to solve the problem.
4: Snippet is very helpful; it solves the problem.
Mean Functional Correctness Score: Average functional correctness of the model's outputs, assessing how well they meet the functional requirements, as rated by an LLM judge; a minimal judging sketch follows the scale below.
0 (failing all possible tests): The code snippet is totally incorrect and meaningless.
4 (passing all possible tests): The code snippet is totally correct and can handle all cases.
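Both LLM-rated columns (usefulness and functional correctness) can be produced with a simple LLM-as-judge loop such as the sketch below. The judge model, the prompt wording, and the use of the OpenAI chat-completions client are illustrative assumptions; the report does not specify which judge or rubric prompt was used.

```python
# Sketch of LLM-as-judge scoring for the usefulness and functional
# correctness columns. The judge model and prompt are assumptions for
# illustration; the report does not document its judging setup.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading a code snippet written for the problem below.
Rate its {criterion} on the 0-4 scale defined in the rubric and reply with the
single digit only.

Problem:
{problem}

Snippet:
{snippet}
"""

def judge_score(problem: str, snippet: str, criterion: str) -> int:
    """Ask the judge model for a 0-4 rating of one snippet."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed judge model
        temperature=0,
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                criterion=criterion, problem=problem, snippet=snippet
            ),
        }],
    )
    return int(response.choices[0].message.content.strip())

# The two mean columns are then the averages of
# judge_score(..., "usefulness") and judge_score(..., "functional correctness")
# over all evaluated problems.
```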