LLM Evaluation Report

| Model | Date | Total Response Time (s) | Tests Passed | Mean CodeBLEU (0-1) | Mean Usefulness Score (0-4) | Mean Functional Correctness Score (0-4) |
| --- | --- | --- | --- | --- | --- | --- |
| gpt-4o-mini | 2024-10-18 | 180.098 | 113 | 0.331988 | 3.66463 | 3.65854 |
| gemini-1.5-pro | 2024-10-18 | 533.694 | 104 | 0.338663 | 3.55488 | 3.59756 |
| claude-3-5-sonnet-20240620 | 2024-10-18 | 339.244 | 112 | 0.300819 | 3.68293 | 3.65854 |
| gpt-4o | 2024-10-18 | 201.997 | 128 | 0.314057 | 3.75 | 3.71951 |
| o1-mini | 2024-10-18 | 773.989 | 130 | 0.335063 | 3.71951 | 3.71951 |
| o1-preview | 2024-10-18 | 2207.5 | 127 | 0.322271 | 3.60366 | 3.60976 |
| claude-3-opus-20240229 | 2024-10-18 | 1056.03 | 114 | 0.322514 | 3.7439 | 3.67683 |

Total Response Time (s): The total time taken by the model to generate all the outputs.
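A minimal sketch of how such a total could be collected, assuming a `generate_fn` callable that wraps a single model call (both the callable and the prompt list are illustrative, not part of this report's actual harness):

```python
import time

def timed_generation(generate_fn, prompts):
    """Generate an output for every prompt and record total wall-clock time.

    generate_fn is an assumed placeholder for a single model call
    (prompt string in, completion string out).
    """
    start = time.perf_counter()
    outputs = [generate_fn(prompt) for prompt in prompts]
    total_seconds = time.perf_counter() - start
    return outputs, total_seconds
```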

Tests Passed: The number of unit tests the model passed during evaluation, out of 164 tests in total.
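The 164-test total matches the size of the HumanEval benchmark. A minimal sketch of how a pass count might be computed, assuming (solution, test) code-string pairs and a simplified in-process runner (a real harness would execute candidates in a sandboxed subprocess with a timeout):

```python
def count_passed_tests(samples):
    """Count generated solutions that pass their unit tests.

    samples: assumed list of (solution_code, test_code) string pairs,
    where test_code raises AssertionError on failure.
    """
    passed = 0
    for solution_code, test_code in samples:
        namespace = {}
        try:
            exec(solution_code, namespace)  # define the candidate function(s)
            exec(test_code, namespace)      # run the unit tests against them
            passed += 1
        except Exception:
            continue  # any exception or failed assertion counts as a fail
    return passed
```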

Mean CodeBLEU: Average CodeBLEU score, a metric for evaluating code generation quality based on both syntactic and semantic correctness.
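A minimal sketch of how the average might be computed, assuming the `codebleu` PyPI package (one common implementation of the metric; the report does not state which implementation was used):

```python
# pip install codebleu
from codebleu import calc_codebleu

def mean_codebleu(references, predictions, lang="python"):
    """Average per-sample CodeBLEU over parallel lists of code strings."""
    scores = []
    for reference, prediction in zip(references, predictions):
        result = calc_codebleu([reference], [prediction], lang=lang)
        scores.append(result["codebleu"])
    return sum(scores) / len(scores)
```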

Mean Usefulness Score: Average rating of the usefulness of the model's output, as judged by an LLM on the following scale:

  • 0: Snippet is not at all helpful; it is irrelevant to the problem.

  • 1: Snippet is slightly helpful; it contains information relevant to the problem, but it would be easier to write the solution from scratch.

  • 2: Snippet is somewhat helpful; it requires significant changes (relative to the size of the snippet), but is still useful.

  • 3: Snippet is helpful, but needs to be slightly changed to solve the problem.

  • 4: Snippet is very helpful; it solves the problem.

Mean Functional Correctness Score: Average score for the functional correctness of the model's outputs, i.e. how well they meet the functional requirements, as judged by an LLM (a sketch of this judging setup follows the rubric below):

  • 0 (failing all possible tests): The code snippet is totally incorrect and meaningless.

  • 4 (passing all possible tests): The code snippet is totally correct and can handle all cases.
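A minimal sketch of how an LLM judge could apply a rubric like the two above. The `ask_judge` callable and the exact prompt wording are assumptions, not this report's actual harness; the usefulness metric would follow the same pattern with its own rubric text:

```python
FUNCTIONAL_CORRECTNESS_RUBRIC = """\
Rate the functional correctness of the code snippet on a 0-4 scale:
0 (failing all possible tests): totally incorrect and meaningless.
4 (passing all possible tests): totally correct and handles all cases.
Respond with a single integer between 0 and 4."""

def judge_snippet(ask_judge, rubric, problem, snippet):
    """Ask a judge LLM to score a snippet against a rubric.

    ask_judge is an assumed placeholder: prompt string in, text reply out.
    """
    prompt = f"{rubric}\n\nProblem:\n{problem}\n\nSnippet:\n{snippet}"
    score = int(ask_judge(prompt).strip())
    if not 0 <= score <= 4:
        raise ValueError(f"judge returned an out-of-range score: {score}")
    return score
```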
