LLM Evaluation Report
| Model | Last updated | Total Response Time (s) | Tests passed (of 164) | Mean CodeBLEU | Mean Usefulness (0-4) | Mean Functional Correctness (0-4) |
|---|---|---|---|---|---|---|
| o1-preview | 2025-04-02 | 3264.19 | 134 | 0.320351 | 3.60976 | 3.59756 |
| o1-mini | 2025-04-02 | 964.977 | 129 | 0.336816 | 3.69512 | 3.75 |
| gpt-4o | 2025-04-02 | 228.668 | 128 | 0.310692 | 3.71951 | 3.67073 |
| gpt-4o-mini | 2025-04-02 | 248.679 | 116 | 0.321981 | 3.62805 | 3.61585 |
| claude-3-5-sonnet-20240620 | 2025-04-02 | 276.394 | 108 | 0.30484 | 3.67683 | 3.66463 |
| claude-3-5-sonnet-20241022 | 2025-04-02 | 291.706 | 112 | 0.328969 | 3.68902 | 3.70732 |
| gemini-1.5-pro | 2025-04-02 | 518.354 | 103 | 0.327295 | 3.46951 | 3.41463 |
| gemini-1.5-flash | 2025-04-02 | 763.949 | 0 | 0.261228 | 0.792683 | 1.32317 |
Total Response Time (s): The total time the model took to generate all outputs in the evaluation.
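The report does not describe its timing harness; as a rough illustration, per-request wall-clock times could be accumulated as in the sketch below, where `generate` is a hypothetical stand-in for the model client call:

```python
import time

def timed_generation(generate, prompts):
    # `generate` is a hypothetical stand-in for whatever client call
    # produces one completion; the report does not name its harness.
    outputs, total_seconds = [], 0.0
    for prompt in prompts:
        start = time.perf_counter()
        outputs.append(generate(prompt))
        total_seconds += time.perf_counter() - start
    return outputs, total_seconds
```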
Tests passed: The number of unit tests the model passed during evaluation, out of 164 in total.
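A total of 164 problems matches the size of HumanEval-style suites. One minimal way to count passes is to run each generated snippet together with its unit tests in a fresh interpreter; the runner below is a hypothetical sketch (`passes_tests`, `candidate_code`, and `test_code` are illustrative names, not the report's actual harness), and a real evaluation would sandbox untrusted code far more carefully:

```python
import subprocess
import sys
import tempfile

def passes_tests(candidate_code: str, test_code: str, timeout: float = 10.0) -> bool:
    # Write the generated snippet and its unit tests to one file and
    # run it in a fresh interpreter; a zero exit code counts as a pass.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

# "Tests passed" would then be:
# passed = sum(passes_tests(code, tests) for code, tests in problems)
```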
Mean CodeBLEU: Average CodeBLEU score, a metric for evaluating code-generation quality based on both syntactic and semantic correctness.
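For reference, the open-source `codebleu` package on PyPI exposes a `calc_codebleu` helper; the report does not say which implementation it used, so this is only one plausible way to compute the metric:

```python
# pip install codebleu
from codebleu import calc_codebleu

reference = "def add(a, b):\n    return a + b\n"
prediction = "def add(x, y):\n    return x + y\n"

# CodeBLEU combines n-gram, weighted n-gram, AST, and data-flow matches.
scores = calc_codebleu([reference], [prediction], lang="python")
print(scores["codebleu"])  # overall score in [0, 1]
```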
Mean Usefulness: Average rating of the usefulness of the model's output, as rated by an LLM judge on the following scale (a sketch of one possible judging setup follows the scale):
0: Snippet is not at all helpful; it is irrelevant to the problem.
1: Snippet is slightly helpful; it contains information relevant to the problem, but it would be easier to write the solution from scratch.
2: Snippet is somewhat helpful; it requires significant changes (relative to the size of the snippet) but is still useful.
3: Snippet is helpful, but needs slight changes to solve the problem.
4: Snippet is very helpful; it solves the problem.
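The report does not publish its judge prompt; the sketch below shows one way such a 0-4 rating could be collected, with `judge_usefulness` and `complete` as purely hypothetical names:

```python
USEFULNESS_RUBRIC = """Rate how useful the code snippet is for solving the
problem, on a 0-4 scale (0 = irrelevant, 4 = solves the problem).
Reply with a single digit."""

def judge_usefulness(problem: str, snippet: str, complete) -> int:
    # `complete` is a hypothetical stand-in for a chat-completion call
    # to the judge model; the report does not name the judge or prompt.
    prompt = f"{USEFULNESS_RUBRIC}\n\nProblem:\n{problem}\n\nSnippet:\n{snippet}"
    reply = complete(prompt)
    return int(reply.strip()[0])  # naive parse of the leading digit
```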
Mean Functional Correctness: Average score of the functional correctness of the model's outputs, assessing how well they meet the functional requirements, as rated by an LLM judge on a 0-4 scale:
0 (failing all possible tests): The code snippet is totally incorrect and meaningless.
4 (passing all possible tests): The code snippet is totally correct and can handle all cases.
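The same judging scaffold sketched above can presumably be reused for functional correctness by substituting this 0-4 correctness rubric for the usefulness one; only the prompt text changes.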