Playwright Agents — 🎭 Planner, 🎭 Generator, 🎭 Healer

What are Playwright Agents?

This article distills the official guidance and demo video into a practical, production‑ready walkthrough. Playwright ships three agents you can run independently or in a loop: 🎭 Planner, 🎭 Generator, and 🎭 Healer.

🎭 Planner

Explores your app and produces a human‑readable Markdown plan.

  • Input: a clear request (e.g. “Generate a plan for guest checkout”), a seed test, optional PRD.
  • Output: specs/*.md with scenarios, steps, and expected results.

🎭 Generator

Converts the Markdown plan into executable Playwright tests and validates selectors/assertions during generation.

  • Input: Markdown from specs/, seed test and fixtures.
  • Output: tests/*.spec.ts aligned to the plan.

🎭 Healer

Runs tests, replays failures, proposes patches (locator updates, waits, data fixes) and re‑runs until passing or guardrails stop.

  • Input: failing test name.
  • Output: a passing test or a skipped test if functionality is broken.

🎭 Planner → 🎭 Generator → 🎭 Healer Overview

1. Requirements

  • Node.js 18+ and npm
  • Playwright Test (latest version)
  • VS Code 1.105+ (Insiders channel) for full agentic UI experience
  • AI Assistant – Choose one: Claude Code, OpenCode, or VS Code with AI extensions
  • Git for version control
  • Modern web browser (Chrome, Firefox, Safari)

2. Step-by-Step Installation Guide

Step 1: Prerequisites

  • Install Node.js 18+ from nodejs.org
  • Install npm (comes with Node.js)
  • Install VS Code 1.105+ (Insiders channel) for the full agentic experience
  • Choose and install an AI Assistant:
    • Claude Code – for Claude integration
    • OpenCode – an open-source, provider-agnostic terminal coding agent
    • VS Code with AI extensions – for built-in AI features
  • Install Git for version control

Step 2: Navigate to Demo Directory

# Navigate to the demo directory
C:\Users\ADMIN\Documents\AI_QUEST_LTP> cd "playwright Agent Test Example - PhatLT"

Step 3: Install Dependencies

playwright Agent Test Example - PhatLT> npm install
playwright Agent Test Example - PhatLT> npx playwright install

Step 4: Initialize Playwright Agents

# Initialize agent definitions for Claude Code (recommended)
playwright Agent Test Example - PhatLT> npx playwright init-agents --loop=claude

# Or for VS Code
playwright Agent Test Example - PhatLT> npx playwright init-agents --loop=vscode

# Or for OpenCode
playwright Agent Test Example - PhatLT> npx playwright init-agents --loop=opencode

Step 5: Verify Setup

# Test seed file
playwright Agent Test Example - PhatLT> npx playwright test tests/seed-agents.spec.ts

# Check project structure
playwright Agent Test Example - PhatLT> dir .claude\agents
playwright Agent Test Example - PhatLT> dir .github
playwright Agent Test Example - PhatLT> dir specs

For reference, the transcript below shows how the project was originally scaffolded:

playwright Agent Test Example - PhatLT> npm init -y
Wrote to playwright Agent Test Example - PhatLT\package.json:
{
  "name": "phatlt-playwright",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "playwright test",
    "test:headed": "playwright test --headed",
    "test:ui": "playwright test --ui",
    "test:debug": "playwright test --debug",
    "test:chromium": "playwright test --project=chromium",
    "test:firefox": "playwright test --project=firefox",
    "test:webkit": "playwright test --project=webkit",
    "report": "playwright show-report",
    "codegen": "playwright codegen"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "type": "commonjs",
  "description": "",
  "devDependencies": {
    "@playwright/test": "^1.56.0",
    "@types/node": "^24.7.2"
  }
}

playwright Agent Test Example - PhatLT> npm install -D @playwright/test
added 1 package, and audited 2 packages in 2s
found 0 vulnerabilities

playwright Agent Test Example - PhatLT> npx playwright install
Installing browsers...
✓ Chromium 120.0.6099.109
✓ Firefox 120.0
✓ WebKit 17.4

playwright Agent Test Example - PhatLT> npm init playwright@latest
✓ Created playwright.config.ts
✓ Created tests/
✓ Created tests/example.spec.ts
✓ Created tests/seed.spec.ts

3. Step-by-Step Testing Guide

Step 1: Test Seed File

Run the seed test to verify Playwright Agents setup:

# Test seed file for agents
playwright Agent Test Example - PhatLT> npx playwright test tests/seed-agents.spec.ts

# Run with browser UI visible
playwright Agent Test Example - PhatLT> npx playwright test tests/seed-agents.spec.ts --headed

# Run in debug mode
playwright Agent Test Example - PhatLT> npx playwright test tests/seed-agents.spec.ts --debug
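
For reference, a seed test is usually tiny. Here is a minimal sketch of what tests/seed-agents.spec.ts might contain (the demo's actual file may differ):

import { test, expect } from '@playwright/test';

// Minimal seed sketch: gives the agents a ready-to-use page context.
test('seed', async ({ page }) => {
  await page.goto('https://www.google.com'); // the app explored in this demo
  await expect(page).toHaveTitle(/Google/);
});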

Step 2: Test Generated Tests

Run the example generated tests from the Generator agent:

# Run generated Google search tests
playwright Agent Test Example - PhatLT> npx playwright test tests/google-search-generated.spec.ts

# Run specific test by name
playwright Agent Test Example - PhatLT> npx playwright test --grep "Perform Basic Search"

# Run all tests
playwright Agent Test Example - PhatLT> npx playwright test

Step 3: Test Different Browsers

# Run tests only on Chromium
playwright Agent Test Example - PhatLT> npx playwright test --project=chromium

# Run tests only on Firefox
playwright Agent Test Example - PhatLT> npx playwright test --project=firefox

# Run tests only on WebKit
playwright Agent Test Example - PhatLT> npx playwright test --project=webkit

Step 4: Generate Test Reports

# Generate HTML report
playwright Agent Test Example - PhatLT> npx playwright show-report

# Run tests with UI mode
playwright Agent Test Example - PhatLT> npx playwright test --ui

Step 5: Using Playwright Agents

Now you can use the Playwright Agents workflow with Claude Code:

# In Claude Code, ask the Planner:
"I need test scenarios for Google search functionality. Use the planner agent to explore https://www.google.com"

# Then ask the Generator:
"Use the generator agent to create tests from the test plan in specs/"

# Finally, use the Healer if tests fail:
"The test 'Perform Basic Search' is failing. Use the healer agent to fix it."

4. Project Structure and Files

playwright Agent Test Example - PhatLT/
├── .claude/agents/              # Claude Code agent definitions
│   ├── playwright-test-planner.md    # 🎭 Planner agent
│   ├── playwright-test-generator.md  # 🎭 Generator agent
│   └── playwright-test-healer.md     # 🎭 Healer agent
├── .github/                     # Official agent definitions
│   ├── planner.md               # 🎭 Planner instructions
│   ├── generator.md             # 🎭 Generator instructions
│   └── healer.md                # 🎭 Healer instructions
├── specs/                       # Test plans (Markdown)
│   └── google-search-operations.md   # Example test plan
├── tests/                       # Generated tests
│   ├── seed-agents.spec.ts      # Seed test for agents
│   └── google-search-generated.spec.ts  # Generated test example
├── .mcp.json                    # MCP server configuration
├── playwright.config.ts         # Playwright configuration
├── package.json                 # Project dependencies
└── test-results/               # Test execution results
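
The .mcp.json file is what connects your AI assistant to the Playwright MCP server. A minimal sketch of its typical shape (assuming the @playwright/mcp package; the file that init-agents writes for you may differ):

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}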

5. How Playwright Agents Work (End‑to‑End)

  1. 🎭 Planner — explores your app and creates human-readable test plans saved in specs/ directory.
  2. 🎭 Generator — transforms Markdown plans into executable Playwright tests in tests/ directory.
  3. 🎭 Healer — automatically repairs failing tests by updating selectors and waits.
  4. Execution — run generated tests with npx playwright test.
  5. Maintenance — Healer fixes issues automatically, keeping tests stable over time.

Example run:

playwright Agent Test Example - PhatLT> npx playwright test tests/seed-agents.spec.ts

Running 1 test using 1 worker

  ✓ [chromium] › tests/seed-agents.spec.ts › seed (2.1s)

  1 passed (2.1s)

playwright Agent Test Example - PhatLT> npx playwright test tests/google-search-generated.spec.ts

Running 5 tests using 1 worker

  ✓ [chromium] › tests/google-search-generated.spec.ts › Google Search - Basic Operations › Perform Basic Search (3.2s)
  ✓ [chromium] › tests/google-search-generated.spec.ts › Google Search - Basic Operations › Verify Search Box Functionality (1.8s)
  ✓ [chromium] › tests/google-search-generated.spec.ts › Google Search - Basic Operations › Search with Empty Query (1.5s)
  ✓ [chromium] › tests/google-search-generated.spec.ts › Google Search - Results Validation › Verify Search Results Display (4.1s)
  ✓ [chromium] › tests/google-search-generated.spec.ts › Google Search - Results Validation › Navigate Through Search Results (5.3s)

  5 passed (16.0s)

6. The Three Agents in Detail

Playwright Agents follow a structured workflow as described in the official documentation. The process involves three main agents working together:

🎭 Planner Agent

The Planner explores your application and creates human-readable test plans:

  • Input: Clear request (e.g., “Generate a plan for guest checkout”), seed test, optional PRD
  • Output: Markdown test plan saved as specs/basic-operations.md
  • Process: Runs seed test to understand app structure and creates comprehensive test scenarios

🎭 Generator Agent

The Generator transforms Markdown plans into executable Playwright tests:

  • Input: Markdown plan from specs/
  • Output: Test suite under tests/
  • Process: Verifies selectors and assertions live, generates robust test code

🎭 Healer Agent

The Healer automatically repairs failing tests:

  • Input: Failing test name
  • Output: Passing test or skipped test if functionality is broken
  • Process: Replays failing steps, inspects UI, suggests patches, re-runs until passing

7. Agent Deep Dives

🎭 Planner — author plans that generate great tests

  • Goal: Convert product intent into executable, atomic scenarios.
  • Inputs: business request, seed.spec.ts, optional PRD/acceptance criteria.
  • Output quality tips: prefer user‑intent over UI steps, keep 1 scenario = 1 assertion focus, name entities consistently.
  • Anti‑patterns: mixing setup/teardown into steps; over‑specifying selectors in Markdown.

🎭 Generator — compile plans into resilient tests

  • Validates selectors live: uses your running app to confirm locators/assertions.
  • Structure: mirrors specs/*.md; adds fixtures from seed.spec.ts; keeps tests idempotent.
  • Resilience: prefer roles/labels; avoid brittle CSS/XPath; centralize waits.
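
To make the resilience bullet concrete, here is a sketch contrasting a brittle locator with the semantic style the Generator prefers (the page URL and element names are hypothetical):

import { test, expect } from '@playwright/test';

test('place order with resilient locators', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical app

  // Brittle: coupled to DOM structure and generated class names.
  // await page.locator('div.cart > button.btn-primary:nth-child(3)').click();

  // Resilient: role-based locators survive markup refactors.
  await page.getByRole('button', { name: 'Place order' }).click();
  await expect(page.getByRole('heading', { name: 'Order confirmed' })).toBeVisible();
});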

🎭 Healer — stabilize and protect correctness

  • Scope: flaky selectors, timing, deterministic data; not business‑logic rewrites.
  • Review gates: patches proposed as diffs; you accept/reject before merge.
  • Outcomes: test fixed, or skipped with a documented reason when the feature is broken.
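
When the feature itself is broken, the documented-skip outcome can look like this sketch (the reason text and ticket reference are hypothetical):

import { test } from '@playwright/test';

test('apply discount code', async ({ page }) => {
  // Skip with a recorded reason instead of masking a real bug.
  test.skip(true, 'Discount form returns 500 since the v2.3 deploy (hypothetical BUG-1234)');
  // The original steps stay in place for when the feature is restored.
});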

8. Project Structure and Artifacts

Playwright Agents follow a structured approach as described in the official documentation. The generated files follow a simple, auditable structure:

repo/
  .github/                    # agent definitions
    planner.md               # planner agent instructions
    generator.md             # generator agent instructions  
    healer.md                # healer agent instructions
  specs/                     # human-readable test plans
    basic-operations.md      # generated by planner
  tests/                     # generated Playwright tests
    seed.spec.ts             # seed test for environment
    add-valid-todo.spec.ts   # generated by generator
  playwright.config.ts       # Playwright configuration

Agent Definitions (.github/)

Under the hood, agent definitions are collections of instructions and MCP tools provided by Playwright. They should be regenerated whenever Playwright is updated:

# Initialize agent definitions
npx playwright init-agents --loop=vscode
npx playwright init-agents --loop=claude  
npx playwright init-agents --loop=opencode

Specs in specs/

Specs are structured plans describing scenarios in human-readable terms. They include steps, expected outcomes, and data. Specs can start from scratch or extend a seed test.

Tests in tests/

Generated Playwright tests, aligned one-to-one with specs wherever feasible. Generated tests may include initial errors that can be healed automatically by the healer agent.

Seed tests (seed.spec.ts)

Seed tests provide a ready-to-use page context to bootstrap execution. The planner runs this test to execute all initialization necessary for your tests including global setup, project dependencies, and fixtures.

// Example: seed.spec.ts
import { test, expect } from './fixtures';

test('seed', async ({ page }) => {
  // This test uses custom fixtures from ./fixtures
  // 🎭 Planner will run this test to execute all initialization
  // necessary for your tests including global setup, 
  // project dependencies and all necessary fixtures and hooks
});
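
The examples in this article import from ./fixtures, a file the snippets don't show. A minimal sketch of what it might contain (the TodoMVC URL is an assumption based on the plan below):

// fixtures.ts - minimal sketch; real projects add auth, data seeding, etc.
import { test as base, expect } from '@playwright/test';

export const test = base.extend({
  // Start every test on the app under test.
  page: async ({ page }, use) => {
    await page.goto('https://demo.playwright.dev/todomvc'); // assumed URL
    await use(page);
  },
});

export { expect };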

9. Examples from Official Documentation

🎭 Planner Output Example

The 🎭 Planner generates human-readable test plans saved as specs/basic-operations.md:

# TodoMVC Application - Basic Operations Test Plan

## Application Overview

The TodoMVC application is a React-based todo list manager that demonstrates 
standard todo application functionality. Key features include:

- **Task Management**: Add, edit, complete, and delete individual todos
- **Bulk Operations**: Mark all todos as complete/incomplete and clear all completed todos  
- **Filtering System**: View todos by All, Active, or Completed status with URL routing support
- **Real-time Counter**: Display of active (incomplete) todo count
- **Interactive UI**: Hover states, edit-in-place functionality, and responsive design

## Test Scenarios

### 1. Adding New Todos

**Seed:** `tests/seed.spec.ts`

#### 1.1 Add Valid Todo

**Steps:**
1. Click in the "What needs to be done?" input field
2. Type "Buy groceries"
3. Press Enter key

**Expected Results:**
- Todo appears in the list with unchecked checkbox
- Counter shows "1 item left"
- Input field is cleared and ready for next entry
- Todo list controls become visible (Mark all as complete checkbox)

🎭 Generator Output Example

The 🎭 Generator transforms the Markdown plan into executable Playwright tests:

// Generated test from specs/basic-operations.md
// spec: specs/basic-operations.md
// seed: tests/seed.spec.ts

import { test, expect } from '../fixtures';

test.describe('Adding New Todos', () => {
  test('Add Valid Todo', async ({ page }) => {
    // 1. Click in the "What needs to be done?" input field
    const todoInput = page.getByRole('textbox', { name: 'What needs to be done?' });
    await todoInput.click();

    // 2. Type "Buy groceries"
    await todoInput.fill('Buy groceries');

    // 3. Press Enter key
    await todoInput.press('Enter');

    // Expected Results:
    // - Todo appears in the list with unchecked checkbox
    await expect(page.getByText('Buy groceries')).toBeVisible();
    const todoCheckbox = page.getByRole('checkbox', { name: 'Toggle Todo' });
    await expect(todoCheckbox).toBeVisible();
    await expect(todoCheckbox).not.toBeChecked();

    // - Counter shows "1 item left"
    await expect(page.getByText('1 item left')).toBeVisible();

    // - Input field is cleared and ready for next entry
    await expect(todoInput).toHaveValue('');
    await expect(todoInput).toBeFocused();

    // - Todo list controls become visible
    await expect(page.getByRole('checkbox', { name: '❯Mark all as complete' })).toBeVisible();
  });
});

10. Best Practices

  • Keep plans atomic: Small, focused scenarios help 🎭 Generator produce clean tests. Avoid mixing multiple user flows in one scenario.
  • Stabilize with seed: Centralize navigation, authentication, and data seeding in seed.spec.ts to ensure consistent test environment.
  • Prefer semantic selectors: Use getByRole, getByLabel, and getByText for resilient element selection.
  • 🎭 Healer guardrails: Review patches carefully; accept locator/wait tweaks, but avoid broad logic changes that might mask real bugs.
  • Version agent definitions: Commit .github/ changes and regenerate them whenever Playwright is updated.
  • Choose the right AI assistant: VS Code, Claude Code, or OpenCode — pick the one that fits your team’s workflow and preferences.
  • Maintain traceability: Keep clear 1:1 mapping from specs/*.md to tests/*.spec.ts using comments and headers.
  • Test the agents: Start with simple scenarios to understand how each agent works before tackling complex user flows.

11. Troubleshooting

🎭 Planner can’t explore the app

Ensure your app is running locally, seed test works, and the app is accessible. Check that authentication and navigation are properly set up in seed.spec.ts.

🎭 Generator can’t find elements

Run the app locally, ensure routes are correct, and verify that elements have proper roles, labels, or accessible names. The 🎭 Generator validates selectors live against your running app.

🎭 Healer loops without fixing

Set explicit timeouts, add deterministic test data, and reduce flakiness in network waits. The 🎭 Healer works best with stable, predictable test conditions.
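
One way to set explicit timeouts is in playwright.config.ts; the values below are illustrative, not recommendations:

// playwright.config.ts - illustrative timeout settings
import { defineConfig } from '@playwright/test';

export default defineConfig({
  timeout: 30_000,             // per-test budget
  expect: { timeout: 5_000 },  // per-assertion budget
  use: {
    actionTimeout: 10_000,     // per action (click, fill, ...)
    navigationTimeout: 15_000, // per navigation
  },
  retries: 1, // a single retry helps separate flake from real failures
});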

AI assistant doesn’t trigger agents

Re-run npx playwright init-agents --loop=[assistant], reload the IDE, and ensure the correct workspace root is open with agent definitions in .github/.

Generated tests fail immediately

Check that your seed test passes first. Ensure the app state matches what the 🎭 Planner observed. Verify that test data and authentication are consistent between planning and execution.

Agent definitions are outdated

Regenerate agent definitions after Playwright updates: npx playwright init-agents --loop=[assistant]. This ensures you have the latest tools and instructions.

12. CI/CD Integration

You can run the same agent‑generated tests in CI. Keep agent definitions in the repo and refresh them on Playwright upgrades.

# .github/workflows/tests.yml (excerpt)
name: Playwright Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test --reporter=html

13. FAQ

Do I need Claude Code?

No. Playwright Agents work with VS Code (v1.105+), Claude Code, or OpenCode. Choose the AI assistant that fits your team’s workflow and preferences.

Where do test plans live?

In specs/ as Markdown files generated by the 🎭 Planner. Generated tests go to tests/.

What if a feature is actually broken?

The 🎭 Healer can skip tests with an explanation instead of masking a real bug. It distinguishes between flaky tests and genuinely broken functionality.

Can I run agent-generated tests in CI?

Yes. The agents produce standard Playwright tests that run with npx playwright test in CI. Agent definitions are only needed for test authoring, not execution.

How do I update agent definitions?

Run npx playwright init-agents --loop=[assistant] whenever Playwright is updated to get the latest tools and instructions.

What’s the difference between 🎭 Planner, 🎭 Generator, and 🎭 Healer?

🎭 Planner: Explores your app and creates human-readable test plans. 🎭 Generator: Transforms plans into executable Playwright tests. 🎭 Healer: Automatically fixes failing tests by updating selectors and waits.

14. Demo video and Source code

GitHub repository: phatltscuti/playwright_agents

 

Codex CLI vs Gemini CLI vs Claude Code

1. Codex CLI – Capabilities and New Features

According to OpenAI’s official announcement (“Introducing upgrades to Codex”), Codex CLI has been rebuilt on top of GPT-5-Codex, turning it into an agentic programming assistant — a developer AI that can autonomously plan, reason, and execute tasks across coding environments.

🌟 Core Abilities

  • Handles both small and large tasks: From writing a single function to refactoring entire projects.
  • Cross-platform integration: Works seamlessly across terminal (CLI), IDE (extension), and cloud environments.
  • Task reasoning and autonomy: Can track progress, decompose goals, and manage multi-step operations independently.
  • Secure by design: Runs in a sandbox with explicit permission requests for risky operations.

📈 Performance Highlights

  • Uses 93.7% fewer reasoning tokens for simple tasks, but invests 2× more computation on complex ones.
  • Successfully ran over 7 hours autonomously on long software tasks during testing.
  • Produces more precise code reviews than older Codex versions.

🟢 In short: Codex CLI 2025 is not just a code generator — it’s an intelligent coding agent capable of reasoning, multitasking, and working securely across terminal, IDE, and cloud environments.

2. Codex CLI vs Gemini CLI vs Claude Code: The New Era of AI in the Terminal

The command line has quietly become the next frontier for artificial intelligence.
While graphical AI tools dominate headlines, the real evolution is unfolding inside the terminal — where AI coding assistants now operate directly beside you, as part of your shell workflow.

Three major players define this new space: Codex CLI, Gemini CLI, and Claude Code.
Each represents a different philosophy of how AI should collaborate with developers — from speed and connectivity to reasoning depth. Let’s break down what makes each contender unique, and where they shine.


🧩 Codex CLI — OpenAI’s Code-Focused Terminal Companion

Codex CLI acts as a conversational layer over your terminal.
It listens to natural language commands, interprets your intent, and translates it into executable code or shell operations.
Now powered by OpenAI’s Codex5-Medium, it builds on the strengths of the o4-mini generation while adding adaptive reasoning and a larger 256K-token context window.

Once installed, Codex CLI integrates seamlessly with your local filesystem.
You can type:

“Create a Python script that fetches GitHub issues and logs them daily,”
and watch it instantly scaffold the files, import the right modules, and generate functional code.

Codex CLI supports multiple languages — Python, JavaScript, Go, Rust, and more — and is particularly strong at rapid prototyping and bug fixing.
Its defining trait is speed: responses feel immediate, making it perfect for fast iteration cycles.

Best for: developers who want quick, high-quality code generation and real-time debugging without leaving the terminal.


🌤️ Gemini CLI — Google’s Adaptive Terminal Intelligence

Gemini CLI embodies Google’s broader vision for connected AI development — blending reasoning, utility, and live data access.
Built on Gemini 2.5 Pro, this CLI isn’t just a coding bot — it’s a true multitool for developers and power users alike.

Beyond writing code, Gemini CLI can run shell commands, retrieve live web data, or interface with Google Cloud services.
It’s ideal for workflows that merge coding with external context — for example:

  • fetching live API responses,

  • monitoring real-time metrics,

  • or updating deployment configurations on-the-fly.

Tight integration with VS Code, Google Cloud SDK, and Workspace tools turns Gemini CLI into a full-spectrum AI companion rather than a mere code generator.

Best for: developers seeking a versatile assistant that combines coding intelligence with live, connected utility inside the terminal.


🧠 Claude Code — Anthropic’s Deep Code Reasoner

If Codex is about speed, and Gemini is about connectivity, Claude Code represents depth.
Built on Claude Sonnet 4.5, Anthropic’s upgraded reasoning model, Claude Code is designed to operate as a true engineering collaborator.

It excels at understanding, refactoring, and maintaining large-scale codebases.
Claude Code can read entire repositories, preserve logic across files, and even generate complete pull requests with human-like commit messages.
Its 200K-token context window allows it to track dependencies, explain architectural patterns, and ensure code consistency over time.

Claude’s replies are more analytical — often including explanations, design alternatives, and justifications for each change.
It trades a bit of speed for a lot more insight and reliability.

Best for: professional engineers or teams managing complex, multi-file projects that demand reasoning, consistency, and full-codebase awareness.

3. Codex CLI vs Gemini CLI vs Claude Code: Hands-on With Two Real Projects

While benchmarks and specs are useful, nothing beats actually putting AI coding agents to work.
To see how they perform on real, practical front-end tasks, I tested three leading terminal assistants — Codex CLI (Codex5-Medium), Gemini CLI (Gemini 2.5 Pro), and Claude Code (Sonnet 4.5) — by asking each to build two classic web projects using only HTML, CSS, and JavaScript.

  • 🎮 Project 1: Snake Game — canvas-based, pixel-style, smooth movement, responsive.

  • ✅ Project 2: Todo App — CRUD features, inline editing, filters, localStorage, dark theme, accessibility + keyboard support.

🎮 Task 1 — Snake Game

Goal

Create a playable 2D Snake Game using HTML, CSS, and JavaScript.
Display a grid-based canvas with a moving snake that grows when it eats food.
The snake should move continuously and respond to arrow-key inputs.
The game ends when the snake hits the wall or itself.
Include a score counter and a restart button with pixel-style graphics and responsive design.

Prompt

Create a playable 2D Snake Game using HTML, CSS, and JavaScript.
The game should display a grid-based canvas with a moving snake that grows when it eats food.
The snake should move continuously and respond to keyboard arrow keys for direction changes.
The game ends when the snake hits the wall or itself.
Show a score counter and a restart button.
Use smooth movement, pixel-style graphics, and responsive design for different screen sizes.

Observations

Codex CLI — Generated the basic canvas scaffold in seconds. Game loop, input, and scoring worked out of the box, but it required minor tuning for smoother turning and anti-reverse logic.

Gemini CLI — Delivered well-structured, commented code and used requestAnimationFrame properly. Gameplay worked fine, though the UI looked plain — more functional than fun.

Claude Code — Produced modular, production-ready code with solid collision handling, restart logic, and a polished HUD. Slightly slower response but the most complete result overall.

✅ Task 2 — Todo App

Goal

Build a complete, user-friendly Todo List App using only HTML, CSS, and JavaScript (no frameworks).
Features: add/edit/delete tasks, mark complete/incomplete, filter All / Active / Completed, clear completed, persist via localStorage, live counter, dark responsive UI, and full keyboard accessibility (Enter/Space/Delete).
Deliverables: index.html, style.css, app.js — clean, modular, commented, semantic HTML + ARIA.

Prompt

Develop a complete and user-friendly Todo List App using only HTML, CSS, and JavaScript (no frameworks). The app should include the following functionality and design requirements:

    1. Input field and ‘Add’ button to create new tasks.
    2. Ability to mark tasks as complete/incomplete via checkboxes.
    3. Inline editing of tasks by double-clicking — pressing Enter saves changes and Esc cancels.
    4. Delete buttons to remove tasks individually.
    5. Filter controls for All, Active, and Completed tasks.
    6. A ‘Clear Completed’ button to remove all completed tasks at once.
    7. Automatic saving and loading of todos using localStorage.
    8. A live counter showing the number of active (incomplete) tasks.
    9. A modern, responsive dark theme UI using CSS variables, rounded corners, and hover effects.
    10. Keyboard accessibility — Enter to add, Space to toggle, Delete to remove tasks.
      Ensure the project is well structured with three separate files:
    • index.html
    • style.css
    • app.js
      Code should be clean, modular, and commented, with semantic HTML and appropriate ARIA attributes for accessibility.

Observations

Codex CLI — Created a functional 3-file structure with working CRUD, filters, and persistence. Fast, but accessibility and keyboard flows needed manual reminders.

Gemini CLI — Balanced logic and UI nicely. Used CSS variables for a simple dark theme and implemented localStorage properly.
Performance was impressive — Gemini was the fastest overall, but its default design felt utilitarian, almost as if it “just wanted to get the job done.”
Gemini focuses on correctness and functionality rather than visual finesse.

Claude Code — Implemented inline editing, keyboard shortcuts, ARIA live counters, and semantic roles perfectly. The result was polished, responsive, and highly maintainable.

4. Codex CLI vs Gemini CLI vs Claude Code — Real-World Comparison

When testing AI coding assistants, speed isn’t everything — clarity, structure, and the quality of generated code all matter. To see how today’s top command-line tools compare, I ran the same set of projects across Claude Code, Gemini CLI, and Codex CLI, including a 2D Snake Game and a Todo List App.
Here’s how they performed.


Claude Code: Polished and Reliable

Claude Code consistently produced the most professional and complete results.
Its generated code came with clear structure, organized logic, and well-commented sections.
In the Snake Game test, Claude built the best-looking user interface, with a balanced layout, responsive design, and smooth movement logic.
Error handling was handled cleanly, and the overall experience felt refined — something you could hand over to a production team with confidence.
Although it wasn’t the fastest, Claude made up for it with code quality, structure, and ease of prompt engineering.
If your workflow values polish, maintainability, and readability, Claude Code is the most dependable choice.


Gemini CLI: Fastest but Basic

Gemini CLI clearly took the top spot for speed.
It executed quickly, generated files almost instantly, and made iteration cycles shorter.
However, the output itself felt minimal and unrefined — both the UI and the underlying logic were quite basic compared to Claude or Codex.
In the Snake Game task, Gemini produced a playable result but lacked visual polish and consistent structure.
Documentation and comments were also limited.
In short, Gemini is great for rapid prototyping or testing ideas quickly, but not for projects where you need beautiful UI, advanced logic, or long-term maintainability.


Codex CLI: Flexible but Slower

Codex CLI offered good flexibility and handled diverse prompts reasonably well.
It could generate functional UIs with decent styling, somewhere between Gemini’s simplicity and Claude’s refinement.
However, its main drawback was speed — responses were slower, and sometimes additional manual intervention was needed to correct or complete the code.
Codex is still a solid option when you need to tweak results manually or explore multiple implementation approaches, but it doesn’t match Claude’s polish or Gemini’s speed.


Overall Impression

After testing multiple projects, the overall ranking became clear:

  • Gemini CLI is the fastest but produces simple and unpolished code.

  • Claude Code delivers the most reliable, structured, and visually refined results.

  • Codex CLI sits in between — flexible but slower and less cohesive.

Each tool has its strengths. Gemini is ideal for quick builds, Codex for experimentation, and Claude Code for professional, trust-ready outputs.

In short:

Gemini wins on speed. Claude wins on quality. Codex stands in between — flexible but slower.

Anthropic Introduces Claude Sonnet 4.5, the World's Best Coding Model

In a world where AI changes by the day, large language models (LLMs) are no longer limited to understanding and generating text: they are moving toward real-world interaction, tool execution, long-lived state, and support for multi-step tasks. Anthropic's Claude is one of the most prominent names in this race, and the newest release, Sonnet 4.5, is positioned as a major leap.

“Claude Sonnet 4.5 is the best coding model in the world. It’s the strongest model for building complex agents. It’s the best model at using computers.” (Anthropic)

1. Introduction

In recent years, models such as GPT (OpenAI), Gemini (Google / DeepMind), and Claude (Anthropic) have become the backbone of many AI applications in production, everyday work, and research. Each model family, though, picks its own balance between power and safety, and between creativity and control.

From the start, Claude has charted its own course: prioritize safety, tool use, and control of harmful content. The Sonnet line in particular serves as the "balanced" tier between the lighter models and the most powerful ones (Opus).

On September 29, 2025, Anthropic officially released Claude Sonnet 4.5, promoted as the strongest model in the Sonnet line and the best combination of code quality, computer use, and complex agent capability.

The official announcement stresses that Sonnet 4.5 is not a minor update but a major step: it significantly improves coding, tool interaction, and reasoning and math, while keeping usage costs unchanged from the earlier Sonnet 4.

2. Highlights and Improvements from the Official Announcement

2.1 "Most aligned frontier model"

Anthropic describes Sonnet 4.5 as the most aligned frontier model it has ever released. Compared with earlier Claude versions, Sonnet 4.5 substantially reduces unwanted behaviors such as:

  • Sycophancy (excessive flattery of the user)
  • Deception (misleading or false information)
  • Power-seeking (attempts to increase its own power)
  • Encouraging delusional thinking

In addition, to counter the risks of a model interacting with tools (agents, prompt injection), Anthropic has improved its defenses against prompt injection, one of the most serious vulnerabilities when a model is combined with tools.

Sonnet 4.5 ships under AI Safety Level 3 (ASL-3) in Anthropic's protection framework, with classifiers that detect high-risk inputs and outputs, especially those related to chemical, biological, radiological, and nuclear (CBRN) weapons.

Anthropic is also explicit that the classifiers will sometimes raise false positives, but the false-positive rate has been reduced significantly compared with earlier releases, starting from Opus 4.

Including this information in a blog post (with an accessible explanation) helps readers see that Sonnet 4.5 is not merely "more powerful" but also "safer".

2.2 Tooling Upgrades and User Experience

Anthropic announced a series of new features and experience improvements:

  • Checkpoints in Claude Code: you can save progress and roll back to an earlier state if a result isn't what you wanted.
  • A new terminal interface and a native VS Code extension, making development easier in a familiar environment.
  • Context editing and a memory tool in the API: these let agents run longer, keep the relevant context in the prompt, and handle more complex work.
  • In the Claude app (web/mobile), integrated code execution and file creation (spreadsheets, slides, documents) directly in the conversation.
  • The Claude for Chrome extension (for Max users), which lets Claude interact directly through the browser: filling in forms, navigating the web, and so on.
  • The Claude Agent SDK: Anthropic is opening the foundation Claude itself uses so developers can build their own agents. The SDK contains components developed for Claude Code: memory management, permission control, sub-agent coordination, and more.
  • A research preview, "Imagine with Claude": an experimental mode that lets Claude create software on the fly, with no pre-written code, reacting interactively to user requests; it was open to Max users for five days.

These details are the "good stuff" to add to a blog post to make it engaging and technically up to date.

2.3 Performance and Notable Benchmarks

Anthropic published benchmark numbers to show Sonnet 4.5's leap:

  • On SWE-bench Verified (a benchmark of real-world coding ability), Sonnet 4.5 is reported as state-of-the-art.
  • Their test setup: 77.2%, averaged over 10 runs, with no extra compute at test time and a 200K-token "thinking" budget.
  • With a 1M-token context configuration, it can reach 82.0%.
  • On OSWorld (a benchmark of real computer use: interacting with a machine, web pages, files, and commands), Sonnet 4.5 scores 61.4%, ahead of the earlier Sonnet 4 (42.2%).
  • In specialist domains such as finance, medicine, law, and STEM, Sonnet 4.5 shows better knowledge and reasoning than older models (including Opus 4.1).
  • Anthropic also says users have seen the model keep its focus for more than 30 hours on complex multi-step tasks.

When you include these figures in a blog post, explain them (for example, what SWE-bench and OSWorld are) so that non-specialist readers understand the value of going from 42% to 61%, or what "staying focused for 30 hours" means in an AI context.

2.4 Pricing Advantages and Easy Migration

A very attractive point Anthropic emphasizes: Sonnet 4.5 pricing stays the same as Sonnet 4. There is no price increase; it is still $3 / $15 per million tokens (depending on plan).

They also stress that Sonnet 4.5 is a drop-in replacement for Sonnet 4: if you are using Sonnet 4 via the API or the Claude app, you can switch to Sonnet 4.5 without many changes.

This makes upgrading from older versions to Sonnet 4.5 more appealing: you gain more without paying more.

2.5 Technical Details and Notes from the System Card

In the announcement, Anthropic also points to the system card that accompanies Sonnet 4.5, where it publishes more detail on safety evaluations, mitigations, testing methodology, misaligned-behavior metrics, how prompt injection is measured, and so on.

For example, the system card includes:

  • A chart of "misaligned behavior scores" (lower is better), measured by an automated auditor system.
  • Methodology and footnotes for the benchmarks: how they tested SWE-bench, OSWorld, Terminal-Bench, τ2-bench, AIME, MMMLU, and Finance Agent.
  • A note that customers in cybersecurity, biological research, and similar fields can be allowlisted if they need to go beyond the CBRN restrictions.

3. Key Improvements in Version 4.5

3.1 Coding and Agent Performance

One of Sonnet 4.5's biggest strengths is real-world coding ability. On the SWE-bench Verified benchmark it reaches about 77.2% (tested with a scaffold and no extra compute), and in the 1M-context configuration it can climb to about 82.0%. In internal trials it sustained a continuous working state for more than 30 hours on complex tasks.

Compared with the earlier Sonnet 4, Sonnet 4.5 reaches 61.4% on the OSWorld benchmark (AI operating a real computer), where Sonnet 4 managed only about 42.2%. That is a big jump in the ability of AI to "use a computer like a real user".

In addition, Sonnet 4.5 is designed to execute many commands in parallel ("parallel tool execution"), for example running several bash commands within one context, making much better use of the available actions per context window.

3.2 User Experience and Supporting Tools

Sonnet 4.5 is not just powerful but also easy to use:

  • Checkpoints in Claude Code: let users save state and roll back when needed.
  • A new terminal interface and a natively integrated VS Code extension, so developers can work in a familiar environment.
  • Context editing and a memory tool in the API: help agents track context, remember earlier steps, and operate over longer tasks.
  • In the Claude app (web/mobile): code execution and file creation (spreadsheets, slides, documents) directly in the conversation, with no need to switch to an external tool.
  • Claude for Chrome: an extension for Max users that lets Claude interact directly with web pages: navigating, filling in forms, handling web interactions.
  • The Claude Agent SDK: Anthropic has opened the foundation Claude uses so users and developers can build their own agents, from memory management to sub-agent coordination and permission control.
  • Imagine with Claude: a research preview in which Claude "creates software on the fly": there is no pre-written code; the model generates and adjusts it to the user's requests. It was offered to Max users for five days.

3.3 Safety and Alignment

Sonnet 4.5 pairs its power with attention to safety:

  • Classifiers detect dangerous inputs and outputs, particularly in CBRN areas, to limit the model's use for chemical, biological, or nuclear weapons.
  • These classifiers sometimes raise false positives, but Anthropic has improved them to reduce the rate: a 10× reduction from the original deployment and 2× compared with Opus 4.
  • Releasing at AI Safety Level 3 (ASL-3) shows Anthropic applying access limits and extra protections matched to the model's capability.
  • Published "misaligned behavior scores" show how much behaviors such as deception, sycophancy, power-seeking, and encouraging delusional thinking have been reduced.
  • Protection against prompt injection is markedly improved, which matters most when the model uses tools or acts as an agent.

These factors are crucial for users to trust Sonnet 4.5 in production, enterprise, and real-world applications.

3.4 Cost and Easy Migration

An appealing point is that pricing stays the same as Sonnet 4: no increase, still $3/$15 per million tokens (depending on plan).

Anthropic says Sonnet 4.5 is a drop-in replacement: if you are using Sonnet 4 via the API or an application, you can switch to Sonnet 4.5 with few changes to code or configuration.

This is an important detail for blog readers: "upgrading" does not have to mean "a big increase in cost".

4. Practical Applications and Standout Potential

With the improvements above, Claude Sonnet 4.5 can be applied strongly in many fields; this section can be illustrated with real examples in your own blog.

4.1 Programming and Software Development

  • Code generation, from small modules to large systems
  • Automatic bug fixing, refactoring, testing, and deployment
  • Agent coordination to manage programming projects: splitting tasks and tracking progress
  • Developer support inside the IDE (thanks to the VS Code extension)

An example from Anthropic: Sonnet 4.5 can understand the coding patterns of a large codebase and carry out debugging and architecture work in the specific context of the project.

4.2 Enterprise Applications and Analytics

  • Automating internal processes: extraction, report consolidation, data analysis
  • Supporting financial analysis, risk modeling, and forecasting
  • In the legal field: analyzing litigation files, summarizing records, drafting legal documents, and supporting CoCounsel (as cited in the announcement)
  • In cybersecurity: red teaming, vulnerability discovery, attack-scenario creation (Anthropic cites Sonnet 4.5 being used by security companies to cut vulnerability intake time by 44% and raise accuracy by 25%)

4.3 Virtual Assistants and Office Work

  • In the Claude app: create slides, spreadsheets, and text documents directly from the conversation
  • Help with email handling, planning, content summarization, and report writing
  • Interact with multiple systems via APIs and perform multi-step tasks

4.4 Intelligent Agents and Continuous Tasks

Thanks to its ability to maintain context, remember over the long term, and interact with tools, Sonnet 4.5 is well suited to building multi-step agents that work continuously over many hours:

  • Project management (planning → monitoring → reporting)
  • Monitoring agents and pipeline automation (CI/CD, product deployment)
  • Agents that interact across systems (CRM, ERP, external APIs)
  • Agents that self-adjust based on new feedback

Anthropic notes that Sonnet 4.5 can stay autonomous in code for 30+ hours: in a continuous programming task, the model stays coherent and doesn't fall apart.

5. Comparing Sonnet 4.5 with Other Models: Strengths and Weaknesses

This section helps readers place Sonnet 4.5 on today's "AI map".

5.1 Versus Earlier Claude Versions (Sonnet 4, Opus 4)

Advantages of 4.5 over Sonnet 4 / Opus 4:

  • Better tool use and real-world interaction (OSWorld up from ~42.2% to ~61.4%)
  • Greater stability and longer state retention ("30+ hours")
  • Checkpoints, context editing, and the memory tool: features Sonnet 4 does not have
  • Price unchanged from Sonnet 4
  • The Agent SDK, opening the way for custom user-built agents
  • Improved safety and alignment

Limitations compared with Opus / premium models:

  • Opus 4 may still have the edge on some extremely demanding reasoning problems
  • Sonnet 4.5 is the "balanced" version; if you need capability at the limit, Opus may still be superior
  • Even with fewer errors, Sonnet 4.5 can still make mistakes in real environments, especially in domains outside its training data

5.2 Versus GPT-4 / GPT-5 / Gemini / Other LLMs

Advantages of Sonnet 4.5:

  • Built-in computer use and tool execution, something traditional GPT deployments need an orchestrated environment to achieve
  • Long-lived agents that hold state and handle multi-step tasks
  • Code execution and file creation integrated directly into the model experience
  • No price increase on upgrade, which creates an incentive to switch
  • Safety and alignment among its design priorities

Challenges versus GPT / Gemini:

  • The GPT / Gemini plugin ecosystems and support communities are larger, with more resources, libraries, and companion applications
  • GPT / Gemini can be stronger at natural language and creative writing in many situations
  • Inference speed, latency, and practical scalability can be weak points if deployment is not done well

5.3 Overall Strengths and Limitations

Strengths:

  • A good combination of raw power and real-world usability
  • Many useful feature improvements (checkpoints, memory, context editing)
  • Safer, with many kinds of unwanted behavior reduced
  • Stable pricing and easy migration
  • Positive feedback from real users

Limitations and risks:

  • Not perfect: it can still fabricate or get logic wrong, especially in new domains
  • When an agent acts autonomously and continuously, loose prompts or weak oversight can lead to serious errors
  • Real-world deployment (infrastructure, stability, resources) remains a major challenge
  • New models arrive quickly: Sonnet 4.5 can be overtaken unless Anthropic keeps innovating ahead of its rivals

6. Conclusion and Advice for Users

Claude Sonnet 4.5 is an impressive step forward in the Claude line: it delivers higher capability in coding, tool interaction, long-running agents, and practical applications. Used properly, it can be a powerful assistant for programmers, analysts, product teams, and many other fields.

However, no AI model is perfect. Users need a clear view of its strengths and weaknesses, should always review its results, set up controls, and keep up to date as new versions arrive.

If you are a developer, an analyst, or a business owner, Claude Sonnet 4.5 is worth considering for tasks that demand strong logic, require tool interaction, or call for intelligent agents.

Exploring Claude Code Subagents: A Demo Setup for a RAG-Based Website Project

1. Introduction

Recently, Anthropic released an incredible new feature for Claude Code: subagents — secondary agents with specific tasks for different purposes within a user’s project.

2. Main Content

a. How to Set It Up:
First, install Claude Code with the following command in your terminal:

npm install -g @anthropic-ai/claude-code

If Claude Code is already installed but on an older version, it won’t have the subagent feature. To update it, run:

claude update

Launch Claude Code in your working directory, then run the command:
/agents

Press Enter, and a management screen for agents will appear, allowing you to start creating agents with specific purposes for your project.

Here, I will set it up following Claude’s recommendation.

After the setup, I have the following subagents:
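
For reference, each subagent is stored as a Markdown file with YAML frontmatter under .claude/agents/. A minimal sketch (the name, description, and tool list here are illustrative, not the demo's exact files):

---
name: code-reviewer
description: Reviews recently changed code for bugs, style, and security issues. Use after significant code edits.
tools: Read, Grep, Glob
---

You are a senior code reviewer. Inspect the most recent changes, check for
correctness, readability, and security problems, and report findings ordered
by severity.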

I will ask Claude to help me build a website using RAG with the following prompt:

The first subagents have started working.

The setup of the RAG project has been completed.

However, I noticed that the subagent ‘production-code-reviewer (Review RAG system code)’ didn’t run after the coding was completed. It may have been an issue with my prompt, so I asked Claude to review the code for me.

After the whole working process, Claude Code delivered an excellent final product.
Link: https://github.com/mhieupham1/claudecode-subagent

3. Conclusion

Through the entire setup process and practical use in a project, it’s clear how powerful and beneficial the Sub-agents feature introduced by Anthropic for Claude Code is. It enables us to have AI “teammates” with specialized skills and roles that operate independently without interfering with each other — allowing projects to be organized, easy to understand, and efficient.

Combining tmux and Claude to Build an Automated AI Agent System (for Mac & Linux)

1. Introduction

With the rapid growth of AI, multi-agent systems are attracting more attention due to their ability to coordinate, split tasks, and handle complex automation. An “agent” can be an independent AI responsible for a specific role or task.

In this article, I’ll show you how to combine tmux (a powerful terminal multiplexer) with Claude (Anthropic’s AI model) to build a virtual organization. Here, AI agents can communicate, collaborate, and work together automatically via the terminal.

 

2. What is tmux?

tmux lets you split your terminal into multiple windows or sessions, each running its own process independently. Even if you disconnect, these sessions stay alive. This is super useful when you want to run several agents in parallel, each in their own terminal, without interfering with each other.

 

3. What is Claude?

Claude is an advanced language AI model developed by Anthropic. It can understand and respond to text requests, and it’s easy to integrate into automated systems—acting as a “virtual employee” taking on part of your workflow.

 

4. Why combine tmux and Claude?

Parallel & Distributed: Each agent is an independent Claude instance running in its own tmux session.

Workflow Automation: Easily simulate complex workflows between virtual departments or roles.

Easy Debug & Management: You can observe each agent’s logs in separate panes or sessions.

 

5. System Architecture

Let’s imagine a simple company structure:

PRESIDENT: Project Director (sets direction, gives instructions)

boss1: Team Leader (splits up tasks)

worker1, worker2, worker3: Team members (do the work)

Each agent has its own instruction file so it knows its role when starting up.

Agents communicate using a script:

./agent-send.sh [recipient] "[message]"

Workflow:

PRESIDENT → boss1 → workers → boss1 → PRESIDENT

 

6. Installation

Since the code is a bit long, I’ll just share the GitHub link to keep things short.

tmux:
Install guide: tmux Installing Guide

Claude:
Install guide: Claude Setup Guide

Git:
Install guide: Git Download

Clone the project:

git clone https://github.com/mhieupham1/claudecliagent

 

Inside, you’ll find the main folders and files:

CLAUDE.md: Describes the agent architecture, communication, and workflows.

instructions/: Contains guidance for each role.

.claude/: JSON files to manage permissions for bash scripts.

setup.sh: Launches tmux sessions for PRESIDENT, boss1, worker1, worker2, worker3 so agents can talk to each other.

agent-send.sh: Script for sending messages between agents.

 

7. Deployment

Run the setup script:

./setup.sh

This will create tmux sessions for PRESIDENT and the agents (boss1, worker1, worker2, worker3) in the background.

To access the PRESIDENT session:

tmux attach-session -t president


To access the multiagent session:

tmux attach-session -t multiagent


In the PRESIDENT session, run the claude command to set up the Claude CLI.

Do the same for the other agents.

Now, in the PRESIDENT window, try entering a request like:

you are president. create a todo list website now

PRESIDENT will start the to-do list: it sends instructions to boss1, and boss1 assigns tasks to worker1, worker2, and worker3.

You can watch boss1 and the workers do their jobs, approve commands to create code files, and wait for them to finish.


8. Conclusion

Combining tmux and Claude lets you create a multi-agent AI system that simulates a real company: communicating, collaborating, and automating complex workflows. Having each agent in its own session makes it easy to manage, track progress, and debug.

This system is great for AI research, testing, or even real-world workflow automation, virtual team assistants, or teamwork simulations.

If you’re interested in developing multi-agent AI systems, try deploying this model, customize roles and workflows to your needs, and feel free to contribute or suggest improvements to the original repo!

Introducing Claude 4 and Its Capabilities

Claude 4 refers to the latest generation of AI models developed by Anthropic, a company founded by former OpenAI researchers. The family is led by Claude Opus 4, the most powerful model, with Claude Sonnet 4 as its faster, balanced counterpart.

Claude Opus 4 is Anthropic’s most powerful model yet and the best coding model in the world, leading on SWE-bench (72.5%) and Terminal-bench (43.2%). It delivers sustained performance on long-running tasks that require focused effort and thousands of steps, with the ability to work continuously for several hours—dramatically outperforming all Sonnet models and significantly expanding what AI agents can accomplish.

Claude Opus 4 excels at coding and complex problem-solving, powering frontier agent products. Cursor calls it state-of-the-art for coding and a leap forward in complex codebase understanding. Replit reports improved precision and dramatic advancements for complex changes across multiple files. Block calls it the first model to boost code quality during editing and debugging in its agent, codename goose, while maintaining full performance and reliability. Rakuten validated its capabilities with a demanding open-source refactor running independently for 7 hours with sustained performance. Cognition notes Opus 4 excels at solving complex challenges that other models can’t, successfully handling critical actions that previous models have missed.

Claude Sonnet 4 significantly improves on Sonnet 3.7’s industry-leading capabilities, excelling in coding with a state-of-the-art 72.7% on SWE-bench. The model balances performance and efficiency for internal and external use cases, with enhanced steerability for greater control over implementations. While not matching Opus 4 in most domains, it delivers an optimal mix of capability and practicality.

GitHub says Claude Sonnet 4 soars in agentic scenarios and will introduce it as the model powering the new coding agent in GitHub Copilot. Manus highlights its improvements in following complex instructions, clear reasoning, and aesthetic outputs. iGent reports Sonnet 4 excels at autonomous multi-feature app development, as well as substantially improved problem-solving and codebase navigation—reducing navigation errors from 20% to near zero. Sourcegraph says the model shows promise as a substantial leap in software development—staying on track longer, understanding problems more deeply, and providing more elegant code quality. Augment Code reports higher success rates, more surgical code edits, and more careful work through complex tasks, making it the top choice for their primary model.

These models advance our customers’ AI strategies across the board: Opus 4 pushes boundaries in coding, research, writing, and scientific discovery, while Sonnet 4 brings frontier performance to everyday use cases as an instant upgrade from Sonnet 3.7.

Key Strengths of Claude 4

 1. Superior Reasoning and Intelligence

Claude 4 ranks at the top in benchmark evaluations such as:

  • MMLU (Massive Multitask Language Understanding)

  • GSM8k (math problem solving)

  • HumanEval (coding)

It rivals or exceeds OpenAI’s GPT-4-turbo and Google Gemini 1.5 Pro in complex reasoning, long-context understanding, and task execution.

 2. Massive Context Window (Up to 200K Tokens)

Claude 4 can read and reason over hundreds of pages at once, making it perfect for:

  • Analyzing lengthy legal or scientific documents

  • Comparing large codebases

  • Summarizing long texts or reports

 3. Advanced Coding Support

Claude 4 excels in:

  • Writing and explaining code in multiple languages (Python, JS, Java, etc.)

  • Debugging and understanding large code repositories

  • Pair programming and iterative development tasks

 4. Natural and Helpful Communication

  • Responses are clear, polite, and structured

  • Especially strong in creative writing, professional emails, and educational explanations

  • Can follow complex instructions and maintain context over long conversations


Safe and Aligned by Design

Claude is built with safety and alignment in mind:

  • It avoids generating harmful or unethical content

  • It is more cautious and transparent than most models

 How to Access or Use Claude 4

Claude is a cloud-based AI model, so you don’t install it like software — instead, you access it via the web or API.

1. Use Claude via Web App

 Steps:

  1. Go to: https://claude.ai

  2. Sign up or log in (you need a US/UK/Canada/EU phone number).

  3. Choose a free or paid plan (Claude Opus 4 is available only on Claude Pro – $20/month).

 Claude Pro Includes:

  • Claude Opus 4 (latest, most powerful)

  • Larger context

  • Priority access during high demand

 Currently, Claude is only available in select countries. If you’re outside the US/UK/Canada/EU, you may need to use a VPN and a virtual phone number to sign up (unofficial workaround).


2. Use Claude via API (For Developers)

 API Access:

  1. Go to: https://console.anthropic.com

  2. Sign up and get an API key

  3. Use the API with tools like Python, cURL, or Postman

 Example (Python):

import anthropic

client = anthropic.Anthropic(api_key="your_api_key")

response = client.messages.create(
    model="claude-opus-4-20250514",  # current Claude Opus 4 model ID
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

# response.content is a list of content blocks; print the first block's text
print(response.content[0].text)
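The same request can also be made with cURL, as mentioned above; a minimal sketch, assuming your key is exported in the ANTHROPIC_API_KEY environment variable:

# POST to the Messages API with the required headers
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-opus-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'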


Can I Install Claude Locally?

No. Like ChatGPT or Gemini, Claude is not open source or downloadable. It is only available via the claude.ai web app or the Anthropic API.

Quick summary:

| Feature | Claude 4 (Claude Opus 4) |
| --- | --- |
| Developer | Anthropic |
| Model Type | Large Language Model (LLM) |
| Reasoning & Math | Top-tier performance |
| Context Length | Up to 200,000 tokens |
| Code Assistance | Strong support for multiple languages |
| Language Style | Human-like, calm, professional |
| Best Use Cases | Analysis, writing, coding, dialogue |
| Access | claude.ai or API |

A Step-by-Step Guide to Integrating and Using Claude Code Action on GitHub

See how capable Claude Code Action is: just create an issue and mention @claude, and Claude can write the code automatically.

Introduction

In the current era of rapidly evolving technology, artificial intelligence (AI) stands out as one of the most significant and transformative breakthroughs on a global scale. Among the various AI-driven tools, Claude, and particularly Claude Code Action, represents a powerful integration that can be embedded into your GitHub repositories to address raised issues with remarkable accuracy and efficiency. This article explores the capabilities and applications of Claude Code Action in modern software development workflows.

Body content

Claude Code Action is an extension categorized as an “Action” and made available on the GitHub Marketplace by Anthropic. You can find it and set it up by following the instructions in its README documentation. Below is a summary of the basic setup steps for integrating Claude Code Action into your GitHub repository:

1. Create a workflow file:

On GitHub: In your GitHub repository, click “Add file” and create the configuration at the path “.github/workflows/[file_name].yml”. For instance:

Next, insert the appropriate workflow configuration for this extension, depending on your intended use:

For example: 

name: Claude PR Assistant

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude-code-action:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && contains(github.event.issue.body, '@claude'))
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude PR Action
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          timeout_minutes: "60"

Then, click “Commit changes” to successfully add the configuration to your repository.

On the user’s local machine: if a folder in VS Code is already connected to the GitHub repository, you can instead create the workflow directory and .yml file manually and push them to the GitHub repository, as sketched below.
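A minimal command-line sketch of that local flow (assuming the repository is already cloned and the workflow above is saved locally as claude.yml; the file name is arbitrary):

# create the directory GitHub Actions reads workflows from
mkdir -p .github/workflows

# move the workflow file into place
mv claude.yml .github/workflows/claude.yml

# commit and push so the Action becomes active on GitHub
git add .github/workflows/claude.yml
git commit -m "Add Claude Code Action workflow"
git push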

2. API key:

  • Next, add the API key to the repository’s Secrets under the Settings tab, rather than hard-coding it directly into the workflow file, to prevent unauthorized access.

In the repository’s Settings, open “Secrets and variables” → “Actions”

Create a new repository secret

Paste your API key as the secret’s value

Name the secret to match the key name referenced in the workflow file (here, ANTHROPIC_API_KEY)

✅ Correct: the key is stored as a repository secret

❌ Never do it: the key is hard-coded in the workflow file
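If you prefer the terminal, the GitHub CLI can create the same secret; a sketch, assuming gh is installed and authenticated for the repository:

# store the key under the exact name the workflow file references
gh secret set ANTHROPIC_API_KEY --body "your_api_key"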

3. Using Claude Code Action

Create a new issue in the repository where Claude is intended to be used:

Describe the task to be resolved – such as feature creation, bug fixing, code review, … – in the issue’s description. You can tag “@claude” directly in the description or in a comment after the issue is created, in order to trigger Claude to process the request.

Example: ask Claude to generate complete login and registration pages based on the initial files in the repo.
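For illustration, the same issue could also be opened from the terminal with the GitHub CLI (the title and body here are hypothetical):

# the @claude mention in the body is what triggers the workflow
gh issue create \
  --title "Add login and registration pages" \
  --body "@claude Please generate complete login and registration pages based on the initial files in the repo."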

Claude is invoked via the API to address the issue described, with the response time depending on the complexity of the request. It uses the token associated with your API key to read the issue content and to create or modify code within the repository.

Claude’s response will appear in the comments section of the issue.

Here, Claude generates additional files, for example register.html and dashboard.html, as part of the requested implementation and shows what changes were made to each file, including which parts were added, modified, or deleted.

At this point, Claude has created a separate branch in the repository containing the proposed changes. The user can then review and consider merging these updates into the main branch via a pull request.
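One way to open and merge that pull request from the terminal, sketched with the GitHub CLI (the branch name claude/issue-1 is a placeholder for whatever branch Claude created):

# open a PR from Claude's branch, using commit info for title/body
gh pr create --head claude/issue-1 --fill

# after reviewing the changes, merge that branch's PR
gh pr merge claude/issue-1 --merge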

After successfully merging into the main branch


Following a successful merge, the issue may be closed. At this point, Claude has been effectively utilized to generate complete, functional demo pages for user login and registration.


4. Result:

Registration page

Login screen

Dashboard screen

In summary, Claude Code Action proves to be a highly effective tool for streamlining development tasks, making it easier for both individuals and teams to enhance productivity.