Shipping with Codex

Codex: The AI Software Engineer Behind a “Vibe Shift” in Programming

Recently at OpenAI we have witnessed something remarkable: a genuine “vibe shift” in how we build software. Since August, usage of Codex, our AI software engineer, has grown tenfold. Codex is not just a tool; it feels like a human colleague you can pair-program with, delegate work to, or simply let run complex tasks on its own.

This surge is no accident. It is the result of a series of major updates that have made Codex a more powerful, safer, and more intuitive agent that works on every platform you build on.

1. The Updates Behind the Shift

A Complete Agent Overhaul

We define the Codex agent as the combination of two pieces: the reasoning model and the harness, the tooling that lets it take action and create value.

  • An Upgraded Model: We first shipped GPT-5, our best agentic model. Building on feedback, we kept optimizing and released GPT-5 Codex, a model fine-tuned specifically for coding work. Users describe it as a “true senior engineer” because it is not afraid to give blunt feedback and to push back on bad ideas.
  • A New Harness: We completely rewrote the harness to take full advantage of the new models. It adds key capabilities such as planning, automatic context compaction (which enables extremely long conversations and sessions, sketched conceptually below), and support for MCP (Model Context Protocol).
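To make automatic context compaction concrete, here is a purely illustrative Python sketch of the idea: once a transcript exceeds a token budget, older turns are folded into a single summary message so the session can keep going. The token estimate and the summarizer below are crude stand-ins, not how the Codex harness actually implements it.

```python
# Illustrative only: a toy version of automatic context compaction.
# Token counting and summarization are stand-ins (word counts and truncation),
# not the heuristics the real harness uses.

def estimate_tokens(text: str) -> int:
    """Rough token estimate; a real harness would use the model's tokenizer."""
    return len(text.split())

def compact(history: list[dict], budget: int = 2000, keep_recent: int = 6) -> list[dict]:
    """If the transcript exceeds the budget, fold older turns into one summary turn."""
    total = sum(estimate_tokens(m["content"]) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    summary = " / ".join(m["content"][:80] for m in old)  # placeholder summarizer
    return [{"role": "system", "content": f"Summary of earlier work: {summary}"}] + recent

history = [{"role": "user", "content": "refactor the parser " * 50} for _ in range(20)]
print(len(compact(history)))  # older turns collapsed into a single summary message
```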

A Refined User Experience

As powerful as the model and agent were, early feedback made it clear the command-line interface (CLI) still felt rough.

  • CLI Revamp: We overhauled the CLI, simplifying the approval modes, making the UI easier to read, and adding many small refinements. Most importantly, Codex CLI now runs with sandboxing enabled by default, keeping it safe out of the box while still giving the user full control.
  • Native IDE Extension: For users who want to view and edit code while collaborating with Codex, we released a native IDE extension. It works with VS Code, Cursor, and other popular forks, and it took off immediately, reaching 100,000 users in its first week, powered by the same agent and open-source harness that drive the CLI.
  • A Faster Codex Cloud: We upgraded Codex Cloud to run many tasks in parallel, speeding up cloud tasks by 90%. Cloud tasks can now set up their own dependencies and even verify their work by taking screenshots and sending them to you.

Codex Works Everywhere

Codex now integrates deeply into your workflow:

  • Slack and GitHub: Codex can be handed tasks directly inside collaboration tools such as Slack. It picks up the full context from the conversation thread, investigates the problem on its own, writes the code, and posts the solution with a summary just minutes later.
  • High-Signal Code Review: Code review is becoming a major bottleneck. We trained GPT-5 Codex specifically to perform ultra-thorough reviews. It explores the entire codebase and its dependencies inside its container, verifying both the intent and the implementation. The findings are so high-signal that many teams have enabled it by default and are even considering making it mandatory.

2. How Codex Is Powering OpenAI

Our internal results at OpenAI are the clearest proof of what Codex can do:

  • 92% of OpenAI’s technical staff use Codex daily (up from 50% in July).
  • Engineers who use Codex submit 70% more pull requests (PRs) per week.
  • Nearly every PR is reviewed by Codex. When it finds a bug, engineers are genuinely pleased: it saves time and raises their confidence when shipping.

3. Real-World Daily Workflows

Our engineers have shared real examples of how they use Codex to tackle hard problems.

Case 1: Iterating on UI with Visual Evidence (Nacho)

Nacho, an iOS engineer, shared a workflow that leans on Codex’s multimodal capabilities:

  • The Problem: In front-end work, the last 10% of polish, such as aligning headers and footers, often takes 90% of the time.
  • The Approach: Nacho hands Codex the task of implementing a UI from a mockup. Unlike older agents (which he compares to “junior engineers”), Codex (the “senior engineer”) verifies its own work.
  • The TDD & Multimodal Loop (sketched below):
    1. Nacho gives Codex a simple tool: a Python script (written by Codex) that extracts snapshots from SwiftUI Previews.
    2. Codex is instructed to use that tool to visually verify the UI code it writes.
    3. Codex then iterates: write code > run tests/snapshots > fix, until the UI is pixel perfect.
  • The Result: Nacho can leave Codex working on the small details (that last 10% of polish) while he does other things, knowing it will check its own work against the images.
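Here is an illustrative Python sketch of that verify-and-iterate loop. The helpers render_snapshot and matches_mockup are hypothetical placeholders for the Codex-written SwiftUI snapshot script and an image comparison step; they are not Nacho’s actual tooling.

```python
# Illustrative sketch of the verify-and-iterate loop described above.
# render_snapshot() and matches_mockup() are hypothetical placeholders for the
# Codex-written SwiftUI Previews snapshot script and a pixel-diff step.
from pathlib import Path

def render_snapshot(view_name: str) -> Path:
    """Placeholder: would invoke the snapshot script and return the rendered PNG."""
    out = Path(f"/tmp/{view_name}.png")
    out.write_bytes(b"fake-image-data")
    return out

def matches_mockup(snapshot: Path, mockup: Path) -> bool:
    """Placeholder: would diff the snapshot against the design mockup pixel by pixel."""
    return snapshot.stat().st_size > 0  # stand-in check

def iterate_until_pixel_perfect(view_name: str, mockup: Path, max_rounds: int = 5) -> bool:
    for round_ in range(1, max_rounds + 1):
        snapshot = render_snapshot(view_name)        # 1. render the current UI
        if matches_mockup(snapshot, mockup):         # 2. compare against the mockup
            print(f"{view_name}: pixel perfect after {round_} round(s)")
            return True
        print(f"{view_name}: mismatch, asking the agent for another fix")  # 3. loop
    return False

iterate_until_pixel_perfect("HeaderView", Path("/tmp/header_mockup.png"))
```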

Case 2: Pushing the Limits of Large Tasks (Fel)

Fel, known for having the longest Codex session (over seven hours) and processing the most tokens (over 150 million), showed how he drives large refactors with just a few prompts.

  • The Problem: A large refactor (such as changing 15,000 lines of code) in a complex project (like his personal JSON parser) usually leaves the entire test suite failing for a long stretch.
  • The Approach: The Exec Plan:
    1. Fel asks Codex to write a spec, called plans.md, for implementing the feature, tasking it with researching the relevant libraries and how to integrate them.
    2. He defines plans.md as a “living design document” that Codex must keep updating, covering the overall goal, a to-do list, progress, and a decision log.
    3. He uses the anchor term “exec plan” so the model knows when to consult and update this document.
    4. Once Fel approves the plan, he gives the command: “Implement.”
  • The Result: Codex can work productively for hours (more than an hour even during the live demo) on a large feature, using plans.md as its memory and its compass. In one session it produced 4,200 lines of code in about an hour, all of it tested and passing. (A skeleton of such an exec plan is sketched below.)
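For reference, a hypothetical skeleton of such a plans.md is shown below; the headings follow the elements Fel lists (goal, to-do list, progress, decision log), while the wording and entries are invented.

```markdown
# Exec Plan: <feature name>

## Goal
One or two sentences describing the end state of the refactor or feature.

## To-Do
- [ ] Research candidate libraries and integration points
- [ ] Break the change into steps that keep the test suite green
- [ ] Migrate call sites module by module

## Progress
- <date>: plan drafted, awaiting approval
- <date>: step 1 landed, all tests passing

## Decision Log
- Chose library X over Y because of streaming support (record every non-obvious choice)
```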

Case 3: The Fix-and-Review Loop, Locally (Daniel)

Daniel, an engineer on the Codex team, walked through the new slash-review workflow, which brings GPT-5 Codex’s high-quality code review down to the local environment.

  • The Problem: Even after the code is done, engineers still want a fresh, unbiased pair of eyes to catch the hard bugs.
  • The Approach: Slash Review: Before opening a PR, Daniel runs the /review command in the CLI.
    • He chooses to review against the base branch, just like a PR.
    • GPT-5 Codex kicks off a dedicated review flow: it studies the files in depth, hunts for technical bugs, and even writes and runs small test scripts to verify suspected bugs before reporting them.
    • No implementation bias: the review runs in a separate thread with fresh context, removing any implementation bias carried over from the earlier conversation.
  • The Fix Loop: When Codex finds a P0/P1 issue, Daniel simply types “Please fix”.
  • The Result: Codex fixes the issue, and Daniel can run /review again until he gets the final thumbs up. The code is thoroughly checked and fixed locally before it is pushed, saving time and making shipping more reliable.

 

The three core Codex capabilities highlighted in the presentation are:

  1. Pair Programming and Delegated Implementation (Implementation & Delegation):
    • Codex works as a pair-programming teammate in the IDE/CLI, helping you write code faster.
    • It can also take on delegated, larger tasks (such as refactors or new features) and carry them out on its own in a cloud/sandboxed environment, including setting up dependencies and running work in parallel.
  2. Automated Verification and Testing (Verification & TDD):
    • Codex integrates tightly with a Test-Driven Development (TDD) workflow.
    • It does not just write code: it runs the unit tests and performs multimodal verification (for example, generating and checking UI snapshots) to make sure the code works correctly and the UI is pixel perfect.
  3. High-Signal Code Review:
    • Using the fine-tuned GPT-5 Codex model, it performs ultra-thorough code review on GitHub PRs or locally via the /review command.
    • This catches hard technical bugs and supports a Review -> Fix -> Review loop that protects code quality before merge, saving time and increasing confidence when shipping.

Video link: https://www.youtube.com/watch?v=Gr41tYOzE20

AgentKit vs Dify: A Comprehensive Analysis for AI Agent Development

I. Introduction

In the rapidly evolving landscape of AI agent development, two prominent platforms have emerged as key players: AgentKit by OpenAI and Dify as an open-source alternative. This comprehensive analysis explores their capabilities, differences, and use cases to help developers and businesses make informed decisions.

II. What is AgentKit?

AgentKit is OpenAI’s comprehensive toolkit for building AI agents, designed to provide developers with the tools and infrastructure needed to create sophisticated AI-powered applications. It represents OpenAI’s vision for the future of AI agent development, offering both foundational components and advanced capabilities.

Core Components

  • Agent Builder: Visual interface for creating and configuring AI agents
  • ChatKit: Pre-built chat interfaces and conversation management
  • Connector Registry: Library of pre-built integrations with popular services
  • Evals: Comprehensive evaluation framework for testing agent performance
  • Guardrails: Safety and compliance tools for production deployments
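To make these components concrete, here is a minimal, hedged sketch of building a small agent in Python with the OpenAI Agents SDK (installable as openai-agents). The agent name, instructions, and the add tool are invented for illustration, and this is the code-level SDK path rather than the hosted Agent Builder or ChatKit surfaces, so verify the exact API shape against the current documentation.

```python
# Minimal sketch, assuming the OpenAI Agents SDK (pip install openai-agents) and an
# OPENAI_API_KEY in the environment. The agent name, instructions, and tool are
# illustrative; they are not part of AgentKit's hosted components.
from agents import Agent, Runner, function_tool

@function_tool
def add(a: int, b: int) -> int:
    """Add two integers (a toy tool so the agent has something to call)."""
    return a + b

support_agent = Agent(
    name="support-agent",
    instructions="Answer briefly. Use the add tool for any arithmetic.",
    tools=[add],
)

result = Runner.run_sync(support_agent, "What is 17 + 25?")
print(result.final_output)
```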

III. What is Dify?

Dify is an open-source platform that enables users to build AI applications without extensive coding knowledge. It focuses on providing a visual, user-friendly interface for creating AI-powered workflows and applications.

Key Features

  • Visual Workflow Builder: Drag-and-drop interface for creating AI workflows
  • Multi-Model Support: Integration with various AI models and providers
  • Template Library: Pre-built templates for common use cases
  • API Management: RESTful APIs for integration
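Since Dify apps are typically consumed through their REST API, a hedged sketch of calling a Dify chat application from Python follows. The base URL, the /v1/chat-messages path, and the payload fields reflect Dify’s published API as I understand it; confirm them against your own instance’s API reference, and the app key shown is a placeholder.

```python
# Hedged sketch of calling a Dify chat application's REST API.
# Endpoint path, payload fields, and response shape are assumptions based on
# Dify's documented API; verify against your instance. The app key is a placeholder.
import requests

DIFY_BASE_URL = "https://api.dify.ai"   # or your self-hosted Dify URL
DIFY_APP_KEY = "app-xxxxxxxxxxxx"       # per-app API key from the Dify console

resp = requests.post(
    f"{DIFY_BASE_URL}/v1/chat-messages",
    headers={"Authorization": f"Bearer {DIFY_APP_KEY}"},
    json={
        "query": "Summarize our refund policy in two sentences.",
        "inputs": {},
        "user": "demo-user",
        "response_mode": "blocking",   # or "streaming" for server-sent events
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("answer"))
```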

IV. Detailed Comparison: AgentKit vs Dify

| Feature | AgentKit | Dify |
| --- | --- | --- |
| Target Audience | Developers & Enterprises | Non-technical users & Startups |
| Learning Curve | Steep (requires coding knowledge) | Gentle (visual interface) |
| Customization Level | High (full code control) | Medium (template-based) |
| Integration Depth | Deep API integration | Surface-level integration |
| Scalability | Enterprise-grade | Small to medium projects |
| Cost Model | Usage-based pricing | Open-source + hosting costs |
| Support | Enterprise support | Community-driven |
| Deployment | Cloud-first | Self-hosted or cloud |
| Security | Built-in enterprise security | Basic security features |
| Performance | Optimized for production | Suitable for prototyping |
Table 1: Feature Comparison Overview

V. Technical Implementation Comparison

Architecture and Deployment

| Aspect | AgentKit | Dify |
| --- | --- | --- |
| Architecture | Microservices, cloud-native | Monolithic, containerized |
| Deployment | OpenAI cloud platform | Self-hosted or cloud |
| Scaling | Auto-scaling, enterprise-grade | Manual scaling, limited |
| Monitoring | Advanced analytics and logging | Basic monitoring |
| Backup | Automated, enterprise backup | Manual backup solutions |

Table 2: Architecture and Deployment Comparison

Security and Compliance

| Security Feature | AgentKit | Dify |
| --- | --- | --- |
| Authentication | Enterprise SSO, MFA | Basic auth, OAuth |
| Data Encryption | End-to-end encryption | Basic encryption |
| Compliance | SOC 2, GDPR, HIPAA | Basic compliance |
| Audit Logging | Comprehensive audit trails | Limited logging |
| Access Control | Role-based, fine-grained | Basic permission system |

Table 3: Security and Compliance Comparison

Performance and Optimization

| Metric | AgentKit | Dify |
| --- | --- | --- |
| Response Time | < 100ms (optimized) | 200-500ms (standard) |
| Throughput | 10,000+ requests/second | 1,000 requests/second |
| Concurrent Users | Unlimited (auto-scaling) | Limited by infrastructure |
| Uptime | 99.9% SLA | Depends on hosting |
| Caching | Advanced caching strategies | Basic caching |

Table 4: Performance and Optimization Comparison

VI. Cost and ROI Analysis

AgentKit Cost Analysis

Initial Costs

  • Setup and configuration: $5,000 – $15,000 USD
  • Team training: $10,000 – $25,000 USD
  • Integration development: $20,000 – $50,000 USD

Monthly Operating Costs

  • API usage: $0.01 – $0.10 USD per request
  • Enterprise support: $2,000 – $10,000 USD/month
  • Infrastructure: $1,000 – $5,000 USD/month

ROI Timeline: 6-12 months for enterprise projects

Dify Cost Analysis

Initial Costs

  • Setup: $0 USD (open source)
  • Basic configuration: $500 – $2,000 USD
  • Custom development: $2,000 – $10,000 USD

Monthly Operating Costs

  • Hosting: $100 – $1,000 USD/month
  • Maintenance: $500 – $2,000 USD/month
  • Support: Community-based (free)

ROI Timeline: 1-3 months for small projects
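As a rough, illustrative way to compare the figures above, the short calculation below takes the midpoint of each quoted range and assumes 100,000 API requests per month at $0.05 each; real costs will vary with usage, team size, and hosting choices.

```python
# Rough first-year comparison using midpoints of the ranges quoted above.
# Purely illustrative arithmetic; actual costs depend on usage, team size, and hosting.
agentkit_initial = 10_000 + 17_500 + 35_000      # setup + training + integration (midpoints)
agentkit_monthly = 6_000 + 3_000                 # enterprise support + infrastructure (midpoints)
agentkit_api = 100_000 * 0.05                    # assumed 100k requests/month at $0.05 each

dify_initial = 0 + 1_250 + 6_000                 # setup + configuration + custom development
dify_monthly = 550 + 1_250                       # hosting + maintenance (midpoints)

agentkit_year_one = agentkit_initial + 12 * (agentkit_monthly + agentkit_api)
dify_year_one = dify_initial + 12 * dify_monthly

print(f"AgentKit, first year: ~${agentkit_year_one:,.0f}")
print(f"Dify, first year:     ~${dify_year_one:,.0f}")
```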

VII. Getting Started (Terminal Walkthrough)

The following screenshots demonstrate the complete setup process from scratch, showing each terminal command and its output for easy replication.

Step 1 — Clone the repository

Shows the git clone command downloading the AgentKit sample repository from GitHub with progress indicators and completion status.

Step 2 — Install dependencies

Displays the npm install process installing required packages (openai, express, cors, dotenv) with dependency resolution and warnings about Node.js version compatibility.

Step 3 — Configure environment (.env)

Demonstrates creating the .env file with environment variables including OPENAI_API_KEY placeholder and PORT configuration.

Step 4 — Run the server

Shows the server startup process with success messages indicating the AgentKit sample server is running on localhost:3000 with available agents and tools.

Step 5 — Verify health endpoint

Displays the API health check response using PowerShell’s Invoke-WebRequest command, showing successful connection and server status.
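If you want a cross-platform alternative to Invoke-WebRequest, the small Python check below does the same thing; note that the /health path is an assumption about the sample server’s routes, so substitute whatever health endpoint your server actually exposes.

```python
# Hedged alternative to the PowerShell check: query the sample server from Python.
# The /health path is an assumed route of the sample server; adjust it as needed.
import requests

try:
    resp = requests.get("http://localhost:3000/health", timeout=5)
    print(resp.status_code, resp.text)
except requests.ConnectionError:
    print("Server not reachable on localhost:3000; is it running?")
```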

Step 6 — Verify port (optional)

Shows netstat command output confirming port 3000 is listening and ready to accept connections.

VIII. Demo Application Features

The following screenshots showcase the key features of our AgentKit sample application, demonstrating its capabilities and user interface.

Main Interface

Shows the main application interface with agent selection dropdown, tools toggle, chat messages area, and input section with modern gradient design.

Agent Switching

Demonstrates switching between different agent types (General, Coding, Creative) with dynamic response styles and specialized capabilities.

Tool Integration

Shows the calculator tool in action, displaying mathematical calculations with formatted results and tool usage indicators.

Conversation Memory

Illustrates conversation history and context awareness, showing how the agent remembers previous interactions and maintains coherent dialogue.

Mobile Responsive

Displays the mobile-optimized interface with responsive design, touch-friendly controls, and adaptive layout for smaller screens.

Error Handling

Shows graceful error handling with user-friendly error messages, retry options, and fallback responses for failed requests.

IX. Conclusion

Key Takeaways

  • AgentKit is ideal for enterprise applications requiring high performance, security, and scalability
  • Dify is perfect for rapid prototyping, small projects, and teams with limited technical expertise
  • Both platforms have their place in the AI development ecosystem
  • Choose based on your specific requirements, team capabilities, and budget constraints

The choice between AgentKit and Dify ultimately depends on your specific needs, team capabilities, and project requirements. AgentKit offers enterprise-grade capabilities for complex, scalable applications, while Dify provides an accessible platform for rapid development and prototyping.

As the AI agent development landscape continues to evolve, both platforms will likely see significant improvements and new features. Staying informed about their capabilities and roadmaps will help you make the best decision for your projects.

This analysis provides a comprehensive overview to help you choose the right platform for your AI agent development needs. Consider your specific requirements, team capabilities, and long-term goals when making your decision.

 

GPT-5: A Quantum Leap in Artificial Intelligence

OpenAI officially launched GPT-5, the most advanced model in its history. This wasn’t just a routine upgrade—it represented a bold leap toward a unified AI system capable of adapting seamlessly between fast, lightweight responses and deep, expert-level reasoning. With GPT-5, OpenAI introduced a model that could dynamically route between different reasoning modes, process multimodal inputs, and deliver results that rival (or even surpass) human experts in areas like coding, healthcare, mathematics, and complex reasoning.

1. From GPT-1 to GPT-5: The Rise of Smarter, Safer, and More Human AI

When OpenAI introduced GPT-1 in 2018, it was a relatively small model with 117 million parameters, capable only of handling basic natural language tasks. Yet, it planted the seed for what would later become a technological revolution.

In 2019, GPT-2 took a giant leap forward. With 1.5 billion parameters, it could generate surprisingly coherent and contextually relevant text. At that time, the public release was even delayed due to concerns over misuse—a sign of how powerful it was compared to what existed before.

Evolution of GPT Models

Then came GPT-3 (2020) with 175 billion parameters. This version made AI accessible to the world. From writing essays, generating code, to assisting in creative tasks, GPT-3 became the first version that truly entered daily workflows. It also laid the foundation for the rise of ChatGPT, which quickly became a household name.

By 2023, GPT-4 introduced multimodal capabilities—understanding not just text but also images, and later, even audio. This turned ChatGPT into a versatile tool: analyzing documents, describing pictures, and holding voice conversations. GPT-4 became the standard for AI in business, education, and creative industries.

In August 2025, OpenAI unveiled GPT-5, marking the next big chapter in this evolution: a unified AI system that adapts seamlessly between fast, lightweight responses and deep, expert-level reasoning. It dynamically routes between reasoning modes, processes multimodal inputs, and delivers results that rival (or even surpass) human experts in areas like coding, healthcare, mathematics, and complex reasoning.

Unlike earlier generations where users had to choose between models (e.g., GPT-4 Turbo, GPT-4o, etc.), GPT-5 introduces a unified architecture:

  • Fast, efficient models for everyday, lightweight tasks.

  • Deep reasoning “thinking” models for complex queries requiring logical, multi-step analysis.

  • A real-time router that automatically determines which model (and reasoning mode) to invoke, based on query complexity, user intent, and even explicit instructions in the prompt like “think deeply about this.”

The user no longer has to make the choice—the model adapts dynamically, delivering both speed and quality without sacrificing one for the other.
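As a purely conceptual illustration of that routing idea (not OpenAI’s actual router, whose signals and thresholds are not public), a toy dispatcher might look like the sketch below; the keyword heuristics and the mode labels are invented.

```python
# Toy illustration of unified routing; not OpenAI's real router.
# The heuristics (keywords, prompt length) and the mode labels are invented.
def route(prompt: str) -> str:
    text = prompt.lower()
    wants_depth = "think deeply" in text or "step by step" in text
    looks_complex = len(prompt.split()) > 200 or "prove" in text
    if wants_depth or looks_complex:
        return "deep-reasoning-mode"   # slower, multi-step analysis
    return "fast-mode"                 # lightweight, low-latency replies

print(route("What's the capital of France?"))                             # fast-mode
print(route("Think deeply about this: design a sharded rate limiter."))   # deep-reasoning-mode
```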

GPT-5 handles more than just text. It processes images, code, structured data, and in some cases audio and video, depending on the platform and API integration. Early reports indicate GPT-5 can work with extremely large context windows—up to 1 million tokens—allowing it to analyze entire books, long meeting transcripts, or massive codebases in one go.

This makes GPT-5 especially valuable in fields that rely on long-form reasoning: research, law, education, and enterprise knowledge management.

2. Key Performance Gains

2.1. Coding and Software Development

GPT-5 achieves state-of-the-art results in software development tasks. It not only writes accurate code but also explains design decisions, reviews existing codebases, and suggests improvements. With larger context windows, developers can now feed entire repositories for refactoring or bug-fixing at once. This drastically reduces development cycles.

GPT-5 sets new records across programming tasks:

  • 74.9% on SWE-Bench Verified (up from GPT-4’s ~49%).

  • 88% on Aider Polyglot multi-language coding benchmark.

Developers using tools like Cursor, Windsurf, and Vercel AI SDK report GPT-5 is more “intuitive, coachable, and reliable” in generating, refactoring, and debugging code.

Developers now have more fine-grained control over outputs with new API parameters:

  • verbosity (low, medium, high) – adjust response length and detail

  • reasoning_effort (minimal, low, medium, high) – choose between deep reasoning or faster execution

Additionally, GPT-5 introduces custom tools that accept plain-text input instead of JSON and supports context-free grammar (CFG) constraints for structured outputs.

GPT-5 comes in multiple sizes via API—gpt-5, gpt-5-mini, and gpt-5-nano—allowing developers to balance performance, cost, and latency. There’s also a gpt-5-chat-latest variant (without reasoning) available in both ChatGPT and the API.
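A hedged sketch of using these controls from the API follows. It assumes the Responses API with the reasoning_effort and verbosity knobs described above (passed as reasoning.effort and text.verbosity) and the gpt-5-mini variant; double-check parameter names and model identifiers against the current OpenAI API reference before relying on it.

```python
# Hedged sketch: calling GPT-5 with the verbosity and reasoning_effort controls
# described above, assuming the Responses API shape. Verify parameter names and
# model identifiers against the current OpenAI API reference.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5-mini",               # or "gpt-5" / "gpt-5-nano" to trade quality vs. cost and latency
    input="Review this function for off-by-one errors: def last(xs): return xs[0:len(xs)-1]",
    reasoning={"effort": "minimal"},  # minimal / low / medium / high
    text={"verbosity": "low"},        # low / medium / high
)
print(response.output_text)
```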

Compared to prior models, GPT-5 is more reliable in developer environments. It makes fewer errors, communicates its capabilities more honestly, and produces safer, more useful outputs.

2.2. Enterprise Integration

In enterprises, GPT-5 can summarize thousands of documents, generate compliance reports, or extract insights from structured and unstructured data. Early adopters report that tasks which took hours of manual effort can now be completed in minutes, enabling employees to focus on higher-value work.

Large organizations—including Amgen, BNY, California State University, Figma, Intercom, Lowe’s, Morgan Stanley, SoftBank, and T-Mobile—are integrating GPT-5 into workflows. The model helps reduce bottlenecks, automate repetitive knowledge tasks, and enable rapid analysis across documents, datasets, and customer interactions.

GPT-5 powers conversational agents that handle millions of customer queries with higher accuracy and empathy. It adapts tone based on context, offering professional responses for business and more casual ones for retail or lifestyle brands. Companies using GPT-5 in customer support have reported reduced ticket backlog and improved satisfaction scores.

2.3. Reduced Hallucinations

One of the biggest leaps is GPT-5’s dramatic reduction in hallucinations. Compared to GPT-4, the model is far less likely to invent citations, fabricate data, or misinterpret instructions.

Instead of flat refusals for sensitive queries, GPT-5 provides “safe completions”: careful, measured answers that maintain compliance without leaving the user frustrated.

2.4. Personalized Interaction

GPT-5 offers multiple interaction “modes”:

  • Fast — lightweight, quick responses.

  • Thinking — deliberate, structured, multi-step reasoning.

  • Pro — research-oriented responses at near-expert level.

In ChatGPT, OpenAI even added personalities like “Cynic,” “Listener,” and “Nerd,” allowing the model to engage in different tones and styles depending on the user’s preference.

2.5. Pricing and Access

  • Free users: GPT-5 is available with usage limits.

  • ChatGPT Plus ($20/month): expanded usage, including access to the reasoning modes.

  • ChatGPT Pro ($200/month): unlimited access to GPT-5 Pro, designed for heavy workloads like enterprise analytics, R&D, and coding at scale.

This tiered system allows accessibility for casual users while scaling to professional and enterprise needs.


3. Real-World Applications

3.1. Education and Research

GPT-5 introduces a “Study Mode” that helps students and educators plan lessons, explain complex concepts, and generate research outlines. Its expanded context window allows it to analyze large syllabi, research papers, or even historical archives in a single conversation.

It’s no exaggeration to say GPT-5 could become a “personal tutor at scale.”

3.2. Agentic Tasks

The model is designed for agent-like behavior: it can manage email, interact with Google Calendar, or execute workflows by connecting with external tools. Platforms like Botpress have already integrated GPT-5 to enable no-code AI agent creation, allowing businesses to deploy assistants without technical expertise.

3.3. Healthcare

On medical and scientific tasks, GPT-5 demonstrates expert-level reasoning. It can read radiology scans, summarize clinical guidelines, and even assist in drug discovery by analyzing molecular data. Compared to earlier models, GPT-5 shows fewer critical errors, making it more reliable as a decision-support system.

On medical benchmarks like MedQA, MedXpertQA, USMLE, and VQA-RAD, GPT-5 outperforms human experts and earlier models. It can analyze radiology images, provide diagnostic reasoning, and summarize clinical guidelines—all while adhering to strict safety and compliance protocols.

For the first time, an AI system is showing signs of being a trustworthy medical co-pilot.

4. Market Feedback

The launch of GPT-5 received significant attention across industries. While many praised its performance in technical benchmarks and enterprise adoption, some users noted that the model initially felt more “robotic” and less personable compared to GPT-4o. This created mixed impressions during the first weeks after release.

Among developers, GPT-5 was widely embraced thanks to its larger context window, reduced hallucinations, and flexible reasoning modes. Many open-source projects and AI startups quickly integrated it into workflows, citing massive productivity gains. However, some developers raised concerns about increased API costs when using higher reasoning levels.

Enterprises have been particularly positive, with companies like Microsoft and Oracle integrating GPT-5 into their flagship products. Reports indicate that customer support efficiency improved, compliance reporting became faster, and analytics workloads were streamlined. For many organizations, GPT-5 is now seen as a strategic investment in AI transformation.

For everyday users, GPT-5 was received with both excitement and skepticism. Many appreciated the deeper reasoning in education, coding help, and creative writing. Still, some preferred GPT-4o’s warmth and conversational style, pushing OpenAI to update GPT-5 with improved “human-like” interaction over time.

4.1. Positive Reception

  • Expert-level reasoning: Sam Altman described GPT-5 as “PhD-level expert intelligence.”

  • Smooth UX: Reviewers compare GPT-5’s unified routing to the iPhone’s Retina display moment—a breakthrough that users didn’t know they needed until they experienced it.

4.2. Constructive Criticism

  • Some users feel GPT-5 lacks warmth and personality compared to GPT-4o, which had more conversational charm.

  • Others argue it’s an incremental upgrade rather than a radical breakthrough in creativity—especially in literature and artistic writing, where rivals like Anthropic’s Claude 4 show more flair.

  • The rollout faced hiccups: early bugs, occasional routing failures, and inconsistent access for some users created frustration.

5. The Road Ahead

GPT-5 is not the end, but a milestone. OpenAI has already signaled that work on GPT-6 and other specialized models is underway. The focus will likely be on deeper reasoning, multimodal integration across video, audio, and sensor data, and even more robust safeguards for safety and alignment.

For all its raw power, GPT-5 still struggles with emotional tone and creativity. Users want AI that feels alive and empathetic, not just efficient. The future may lie in combining reasoning with emotional intelligence.

Currently, GPT-5 does not “learn in real-time.” Updating its knowledge requires retraining, limiting its ability to adapt instantly. The next frontier for AGI will be continuous, safe online learning.

OpenAI faces rivals like Anthropic’s Claude 4, xAI’s Grok 4 Heavy, and Google DeepMind’s Gemini Ultra. To stay ahead, GPT-5 must balance cost, speed, creativity, and safety while expanding real-world impact.

6. Conclusion

GPT-5 isn’t just another model—it’s a system: fast when needed, deeply analytical when required, and adaptive across tasks from coding to healthcare. It marks OpenAI’s boldest move yet toward AGI.

But technology alone won’t decide GPT-5’s success. The real test lies in whether users feel trust, warmth, and creativity in their interactions. For AI to truly integrate into daily life, it must not only think like an expert but also connect like a human.

In the coming months and years, GPT-5 may well become the invisible engine powering education, business, and healthcare. And if OpenAI succeeds in blending intelligence with empathy, GPT-5 could be remembered as the moment AI became not just useful—but indispensable.

PaperBench: A Benchmark for Evaluating AI’s Ability to Replicate AI Research

In the rapidly evolving world of artificial intelligence (AI), the ability to push the boundaries of scientific discovery is a tantalizing prospect. Imagine an AI system that can not only understand complex research papers but also replicate their experiments with precision, paving the way for faster scientific progress. This vision is at the heart of PaperBench, a groundbreaking benchmark introduced by OpenAI to evaluate AI’s capability to replicate advanced machine learning (ML) research. Published on April 2, 2025, the PaperBench paper presents a rigorous framework for testing AI agents in a task that challenges even seasoned human researchers: reproducing the results of cutting-edge ML papers. In this blog, we’ll dive deep into the PaperBench framework, explore its implications, analyze its results, and discuss its potential to shape the future of AI-driven research.

The Structure of PaperBench

To create a robust and fair evaluation framework, PaperBench is meticulously designed with several key components:

1. Dataset: 20 ICML 2024 Papers

The benchmark is built around 20 papers from ICML 2024, chosen for their complexity and significance. These papers cover a wide range of ML topics, ensuring that AI agents are tested on diverse challenges. Each paper comes with a detailed evaluation rubric, developed in collaboration with the original authors to ensure accuracy. These rubrics break down the replication process into specific tasks, making it possible to evaluate AI performance systematically.

The dataset is massive, comprising 8,316 fine-grained tasks (referred to as leaf nodes) across the 20 papers. Each task represents a concrete requirement, such as implementing a specific algorithm, tuning a hyperparameter, or achieving a particular performance metric. This granular approach allows for precise assessment while reflecting the multifaceted nature of research replication.

2. Hierarchical Evaluation

PaperBench organizes tasks into a hierarchical tree structure. At the top level, tasks are broad (e.g., “reproduce the main experiment”). These are broken down into smaller, weighted subtasks, with the smallest units (leaf nodes) being specific and verifiable within 15 minutes by an expert. Weights reflect the importance of each task to the overall replication, ensuring that critical components contribute more to the final score.

The scoring system aggregates performance across all tasks, providing a single percentage score that indicates how closely the AI’s replication matches the original paper. This structure balances granularity with practicality, making PaperBench both comprehensive and manageable.
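To illustrate the weighted, hierarchical scoring described above, here is a simplified sketch; the node names, weights, and leaf scores are invented for the example and do not come from an actual PaperBench rubric.

```python
# Simplified sketch of hierarchical, weighted rubric scoring as described above.
# Node names, weights, and leaf scores are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    weight: float                       # importance relative to sibling nodes
    score: float | None = None          # leaf nodes carry a 0.0-1.0 judge score
    children: list["Node"] = field(default_factory=list)

def aggregate(node: Node) -> float:
    """Leaves return their own score; internal nodes return the weighted mean of their children."""
    if not node.children:
        return node.score or 0.0
    total_weight = sum(child.weight for child in node.children)
    return sum(child.weight * aggregate(child) for child in node.children) / total_weight

rubric = Node("Reproduce the main experiment", 1.0, children=[
    Node("Implement the training algorithm", 2.0, children=[
        Node("Loss function matches the paper", 1.0, score=1.0),
        Node("Optimizer and schedule as specified", 1.0, score=0.0),
    ]),
    Node("Reported metric matched within tolerance", 1.0, score=0.5),
])
print(f"Replication score: {aggregate(rubric):.1%}")   # 50.0% for this toy rubric
```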

3. Competition Rules

To ensure a fair and realistic evaluation, PaperBench imposes strict rules:

  • No Access to Author Code: AI agents cannot use the authors’ code repositories or publicly available implementations (listed in a blocklist). This forces the AI to rely on the paper’s text and its own reasoning.

  • Internet Access Allowed: Agents can search the web for background information or reference materials, mimicking how human researchers work.

  • Submission Requirements: Each AI must submit a code repository with a reproduce.sh script that automates the replication process, including code execution and result generation.

These rules strike a balance between realism and rigor, ensuring that AI agents are tested on their ability to independently interpret and implement research.

4. SimpleJudge: Automated Evaluation

Manually evaluating AI submissions for 20 papers would be prohibitively time-consuming, requiring tens of hours per paper. To address this, OpenAI developed SimpleJudge, an automated evaluation system powered by their o3-mini model. SimpleJudge assesses each leaf node based on the AI’s submitted code and results, producing a score for every task. The system is cost-effective, with an estimated cost of $66 per paper evaluation.

To validate SimpleJudge’s accuracy, OpenAI created JudgeEval, a secondary benchmark that compares SimpleJudge’s scores to human judgments. This ensures that the automated system aligns closely with expert evaluations, maintaining the benchmark’s reliability.
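In the same spirit, the sketch below shows how an LLM-based judge could grade a single leaf node; the prompt format, the YES/NO convention, and the use of the chat completions endpoint are assumptions made for illustration, not OpenAI’s actual SimpleJudge implementation.

```python
# Hedged sketch of an LLM judge grading one rubric leaf, in the spirit of SimpleJudge.
# The prompt format, YES/NO convention, and endpoint choice are illustrative assumptions,
# not OpenAI's actual implementation. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def judge_leaf(criterion: str, evidence: str) -> int:
    """Ask the model whether the submission satisfies one leaf criterion; return 1 or 0."""
    reply = client.chat.completions.create(
        model="o3-mini",
        messages=[
            {"role": "system", "content": "You grade research-replication submissions. Answer strictly YES or NO."},
            {"role": "user", "content": f"Criterion: {criterion}\n\nEvidence from the submission:\n{evidence}"},
        ],
    )
    return 1 if "YES" in reply.choices[0].message.content.upper() else 0

print(judge_leaf(
    "The training script implements the paper's data augmentation pipeline.",
    "reproduce.sh runs train.py, which applies the random crops and flips described in Section 3.2.",
))
```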

Workflow of PaperBench

Figure 1: Overview of the PaperBench evaluation workflow.

To better illustrate the PaperBench evaluation process, Figure 1 provides a visual overview of how an AI agent interacts with the benchmark to replicate a research paper. The workflow proceeds through the following steps:

  1. Task Setup: The AI agent is given a research paper along with a grading rubric. The rubric outlines the specific criteria required for a successful replication of the paper’s contributions.
  2. Agent Submission: The AI agent creates a codebase from scratch as its submission. This codebase is intended to replicate the empirical results of the research paper.
  3. Reproduction Phase: The submitted codebase is executed in a clean environment to verify whether it reproduces the results reported in the paper. This ensures that the outputs are genuinely generated by the agent’s code and not hard-coded.
  4. Grading: The results of the reproduction phase are graded against the rubric by an LLM-based judge. The judge evaluates the submission based on predefined criteria, such as result accuracy, execution correctness, and code implementation quality.
  5. Final Score: The AI agent’s performance is summarized as a replication score, which reflects how well it met the rubric’s requirements.

Results from PaperBench

OpenAI tested PaperBench on several state-of-the-art AI models, including GPT-4o, o1, o3-mini, DeepSeek-R1, Claude 3.5 Sonnet (New), and Gemini 2.0 Flash. The results provide a fascinating glimpse into the strengths and limitations of current AI systems.

Key Findings

  • Top Performer: Claude 3.5 Sonnet (New): Running in the open-source agent scaffold, this model achieved the highest average score of 21.0% across the 20 papers. While impressive, this score underscores the difficulty of the task, as even the best AI fell far short of perfect replication.

  • Human Baseline: In a controlled experiment on a subset of three papers, PhD-level ML researchers scored an average of 41.4% after 48 hours of work, compared to 26.6% for o1. This gap highlights that humans still outperform AI in complex research tasks, largely due to their ability to handle ambiguity and leverage domain expertise.

  • PaperBench Code-Dev: In a simplified version of the benchmark that focuses only on code development (without requiring experiment execution), o1 scored 43.4%, approaching human performance. This suggests that AI excels at coding but struggles with the full replication pipeline, particularly in executing and validating experiments.

Analysis

The relatively low scores (even for the top-performing Claude 3.5 Sonnet) reflect the inherent challenges of PaperBench. Research papers often lack explicit details about implementation, requiring the AI to make educated guesses or infer missing information. Humans, with their extensive training and intuition, are better equipped to navigate these gaps. For AI, tasks like hyperparameter tuning, debugging complex code, or interpreting vague experimental descriptions proved particularly difficult.

The results also highlight the importance of the full replication pipeline. While AI models performed well in code development (as seen in the Code-Dev variant), their ability to execute experiments and achieve the reported results lagged behind. This suggests that future improvements in AI reasoning and experimental design will be critical for closing the gap with human researchers.

The Broader Implications of PaperBench

PaperBench is more than just a benchmark—it’s a catalyst for advancing AI’s role in scientific discovery. Its implications are far-reaching, touching on research, education, and industry.

1. Measuring AI Progress

By providing a standardized, challenging task, PaperBench serves as a yardstick for tracking AI’s progress in research automation. As models improve, their scores on PaperBench will reflect advancements in reasoning, coding, and scientific understanding. This could guide the development of AI systems tailored for research applications.

2. Accelerating Science

If AI can reliably replicate research, it could transform the scientific process. Reproducibility is a persistent challenge in ML and other fields, with many studies failing to replicate due to incomplete documentation or errors. AI agents that excel at replication could verify findings, identify discrepancies, and accelerate the validation of new discoveries.

3. Open-Source Collaboration

The open-source release of PaperBench on GitHub encourages the global research community to contribute new papers, refine evaluation rubrics, and develop better AI agents. This collaborative approach ensures that the benchmark evolves with the field, remaining relevant as ML research advances.

4. Educational Potential

PaperBench could also serve as a learning tool for students and early-career researchers. By studying the rubrics and attempting to replicate papers, they can gain hands-on experience with cutting-edge ML techniques. AI agents could assist by generating initial code or highlighting key steps, making the learning process more accessible.

Challenges and Future Directions

Despite its strengths, PaperBench faces several challenges that OpenAI acknowledges in the paper:

1. Scalability

Creating evaluation rubrics for each paper is labor-intensive, requiring weeks of collaboration with authors. Scaling PaperBench to include hundreds or thousands of papers would be a logistical challenge. Future work could explore automated rubric generation or simplified evaluation frameworks to address this.

2. Dependence on Paper Quality

The success of replication depends on the clarity and completeness of the original paper. If a paper omits critical details (a common issue in ML research), even the best AI or human researcher may struggle to reproduce the results. PaperBench could inspire the ML community to adopt more transparent reporting practices.

3. Cost of Evaluation

While SimpleJudge reduces the time and cost of evaluation, assessing thousands of tasks across multiple papers is still resource-intensive. Optimizing SimpleJudge or developing alternative evaluation methods could make PaperBench more accessible to smaller research groups.

4. Expanding Beyond ML

Currently, PaperBench focuses on ML research, but its framework could be adapted to other fields like physics, biology, or chemistry. Expanding the benchmark to these domains would broaden its impact and test AI’s versatility in scientific replication.

Future Directions

OpenAI outlines several exciting possibilities for PaperBench’s evolution:

  • Simplified Variants: Developing lighter versions like PaperBench Code-Dev to reduce evaluation costs and broaden accessibility.

  • Cross-Disciplinary Benchmarks: Extending the framework to other scientific disciplines, creating a universal standard for AI-driven research.

  • Improved AI Agents: Using PaperBench to train specialized AI models that excel at research tasks, potentially integrating with tools like code interpreters or experiment planners.

  • Community-Driven Growth: Encouraging researchers to contribute new papers and rubrics, ensuring that PaperBench remains a dynamic and relevant resource.

Conclusion: A Step Toward Autonomous Research

PaperBench is a bold and ambitious effort to test AI’s potential as a research partner. Its results—while showing that AI is not yet on par with human researchers—demonstrate significant progress and highlight clear areas for improvement. With Claude 3.5 Sonnet achieving a 21.0% score and humans at 41.4%, the gap is substantial but not insurmountable. As AI models become more adept at reasoning, coding, and experimental design, their performance on PaperBench will improve, bringing us closer to a future where AI can independently drive scientific breakthroughs.

For researchers, PaperBench offers a powerful tool to evaluate and refine AI systems. For the broader scientific community, it promises to accelerate discovery by automating one of the most challenging aspects of research: replication. And for students and enthusiasts, it provides a window into the cutting edge of ML, with open-source resources to explore and learn from.

As we look to the future, PaperBench stands as a testament to the potential of AI to transform science. It’s a reminder that while the journey to autonomous research is complex, each step forward brings us closer to a world where AI and humans collaborate seamlessly to unravel the mysteries of the universe.