
GPT-4o vs Claude 3.5 Sonnet

GPT-4o (OpenAI)

OpenAI's flagship multimodal model

Arena ELO: 1314
Context: 128K tokens
Speed: 110 tokens/s
Input price: $2.50 per 1M tokens
Claude 3.5 Sonnet (Anthropic)

🏆 Overall Winner

Anthropic's most intelligent model

Arena ELO: 1298
Context: 200K tokens
Speed: 85 tokens/s
Input price: $3.00 per 1M tokens
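To get a quick sense of what the listed input prices mean in practice, here is a minimal Python sketch. The 50M-token volume is an arbitrary example, and note that output-token pricing (not shown on this page) is billed separately and is typically higher:

```python
# Rough input-cost comparison using the per-1M-token prices listed above.
# Input tokens only; output pricing is separate and usually higher.
PRICE_PER_M = {"GPT-4o": 2.50, "Claude 3.5 Sonnet": 3.00}

def input_cost(model: str, tokens: int) -> float:
    """Return the input cost in USD for the given number of prompt tokens."""
    return PRICE_PER_M[model] * tokens / 1_000_000

# Example: 50M input tokens per month
for model in PRICE_PER_M:
    print(f"{model}: ${input_cost(model, 50_000_000):.2f}")
# GPT-4o: $125.00
# Claude 3.5 Sonnet: $150.00
```

At this volume the $0.50 per-million difference adds up to $25/month, so for short-prompt workloads the gap is small relative to the capability trade-offs below.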

Capability Radar (chart): category performance across six domains

Benchmark Scores (chart): MMLU · HumanEval · MATH · GSM8K · GPQA · BBH

GPT-4o

Pros

Best-in-class multimodal capabilities
Fastest major frontier model
Extensive third-party integrations

Cons

Higher cost than alternatives
Context window smaller than Gemini 1.5 Pro

Claude 3.5 Sonnet

Pros

Leading coding performance (SWE-bench leader)
200K context window
Exceptional instruction following

Cons

Slower than GPT-4o
No native audio capabilities

🏆 Our Verdict

Based on overall benchmark averages, Claude 3.5 Sonnet has the edge, averaging 84.6% across the six benchmarks above. The best choice still depends on your use case: GPT-4o excels at multimodal work and raw speed, while Claude 3.5 Sonnet stands out for coding (SWE-bench leader) and its larger 200K context window.
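The "overall benchmark average" behind the verdict is presumably a simple unweighted mean over the six listed benchmarks. A sketch of that calculation, using placeholder scores that are NOT the real values behind the 84.6% figure:

```python
# Unweighted mean over the six benchmarks shown on this page.
# The example scores are hypothetical placeholders for illustration only.
from statistics import mean

BENCHMARKS = ["MMLU", "HumanEval", "MATH", "GSM8K", "GPQA", "BBH"]

def overall_average(scores: dict) -> float:
    """Simple unweighted mean over the six benchmarks, in percent."""
    return mean(scores[b] for b in BENCHMARKS)

# Hypothetical example scores (percent):
example = {"MMLU": 88.0, "HumanEval": 92.0, "MATH": 71.0,
           "GSM8K": 96.0, "GPQA": 59.0, "BBH": 93.0}
print(round(overall_average(example), 1))
```

An unweighted mean treats a point of GPQA the same as a point of GSM8K, so if your workload is dominated by one domain (e.g. coding), the per-benchmark scores matter more than the headline average.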

More Comparisons

GPT-4o vs Gemini 1.5 Pro
Claude 3.5 Sonnet vs Gemini 1.5 Pro
GPT-4o vs Llama 3.1 405B
Claude 3.5 Sonnet vs Llama 3.1 405B
GPT-4o vs Grok 2
Claude 3.5 Sonnet vs Grok 2
GPT-4o vs Mistral Large 2
Claude 3.5 Sonnet vs Mistral Large 2