LLM Evaluation Report

| Model | Date | Total Response Time (s) | Tests Passed | Mean CodeBLEU (0-1) | Mean Usefulness Score (0-4) | Mean Functional Correctness Score (0-4) |
| --- | --- | --- | --- | --- | --- | --- |
| o1-preview | 2025-01-21 | 2379.88 | 131 | 0.317852 | 3.62805 | 3.62805 |
| o1-mini | 2025-01-21 | 933.915 | 128 | 0.326939 | 3.68293 | 3.77439 |
| gpt-4o | 2025-01-21 | 317.122 | 121 | 0.321377 | 3.75 | 3.7622 |
| gpt-4o-mini | 2025-01-21 | 309.799 | 117 | 0.338521 | 3.68902 | 3.75 |
| claude-3-5-sonnet-20240620 | 2025-01-21 | 244.255 | 111 | 0.298804 | 3.62805 | 3.65244 |
| claude-3-5-sonnet-20241022 | 2025-01-21 | 254.239 | 115 | 0.312278 | 3.70732 | 3.66463 |
| gemini-1.5-pro | 2025-01-21 | 507.246 | 101 | 0.335308 | 3.48171 | 3.47561 |
| gemini-1.5-flash | 2025-01-21 | 764.864 | 2 | 0.267744 | 0.689024 | 0.914634 |

Total Response Time (s): The total time taken by the model to generate all the outputs.

Tests Passed: The number of unit tests the model passed during evaluation, out of 164 in total.
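The report does not show the harness itself; a test is typically counted as passed when the model's completion runs cleanly against the problem's unit tests. A minimal sketch, assuming a HumanEval-style setup where each of the 164 problems provides a `prompt`, a `test_code` block defining `check(candidate)`, and an `entry_point` (all names here are illustrative, not the report's actual harness):

```python
import multiprocessing


def _run(program: str) -> None:
    # Execute the candidate program plus its unit tests in a fresh namespace.
    exec(program, {"__name__": "__eval__"})


def passes_tests(prompt: str, completion: str, test_code: str,
                 entry_point: str, timeout: float = 10.0) -> bool:
    """Return True if the completion passes the problem's unit tests (illustrative)."""
    program = prompt + completion + "\n" + test_code + f"\ncheck({entry_point})\n"
    proc = multiprocessing.Process(target=_run, args=(program,))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():        # treat timeouts as failures
        proc.terminate()
        return False
    return proc.exitcode == 0  # non-zero exit => assertion failure or error


# Tests Passed = sum(passes_tests(...) for each of the 164 problems)
```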

Mean CodeBLEU: Average CodeBLEU score, a code-generation quality metric that combines n-gram, syntax (AST), and data-flow matches between generated and reference code, capturing both syntactic and semantic correctness.
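For reference, the per-problem scores can be averaged along the following lines. A minimal sketch using the `codebleu` PyPI package, one common CodeBLEU implementation; whether this is the exact package used for the report is an assumption:

```python
from codebleu import calc_codebleu  # pip install codebleu (assumed implementation)


def mean_codebleu(references: list[str], predictions: list[str]) -> float:
    """Average CodeBLEU over aligned lists of reference and generated solutions."""
    scores = []
    for ref, pred in zip(references, predictions):
        result = calc_codebleu([ref], [pred], lang="python",
                               weights=(0.25, 0.25, 0.25, 0.25))
        scores.append(result["codebleu"])
    return sum(scores) / len(scores)
```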

Mean Usefulness Score: Average usefulness rating of the model's outputs, as judged by an LLM, on the following scale:

  • 0: Snippet is not at all helpful, it is irrelevant to the problem.

  • 1: Snippet is slightly helpful, it contains information relevant to the problem, but it is easier to write the solution from scratch.

  • 2: Snippet is somewhat helpful, it requires significant changes (compared to the size of the snippet), but is still useful.

  • 3: Snippet is helpful, but needs to be slightly changed to solve the problem.

  • 4: Snippet is very helpful, it solves the problem.

Mean Functional Correctness Score: Average functional-correctness score of the model's outputs, assessing how well they meet the functional requirements, as judged by an LLM (see the sketch after this rubric):

  • 0 (failing all possible tests): The code snippet is totally incorrect and meaningless.

  • 4 (passing all possible tests): The code snippet is totally correct and can handle all cases.
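Both judged metrics use the 0-4 rubrics above. The report does not specify the judge model or prompt; a minimal sketch of how such an LLM-graded score could be collected with the OpenAI Python client, where the judge model name, prompt wording, and `grade_snippet` helper are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()

RUBRICS = {
    "usefulness": "0 = irrelevant ... 4 = solves the problem",                 # abbreviated rubric
    "functional correctness": "0 = totally incorrect ... 4 = handles all cases",
}


def grade_snippet(problem: str, snippet: str, metric: str) -> int:
    """Ask a judge LLM for an integer 0-4 score on the chosen rubric (illustrative)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed judge model; the report does not name one
        messages=[
            {"role": "system",
             "content": f"Rate the code snippet from 0 to 4 for {metric}. "
                        f"Rubric: {RUBRICS[metric]}. Reply with a single integer."},
            {"role": "user",
             "content": f"Problem:\n{problem}\n\nSnippet:\n{snippet}"},
        ],
        temperature=0,
    )
    return int(response.choices[0].message.content.strip())


# Mean Usefulness / Functional Correctness = average of grade_snippet(...) over all problems.
```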
