
Wednesday, May 6, 2026

Building a QA Postman Expert in ChatGPT for API Testing Workflows

AI tools can be very useful for QA engineers when they are configured for specific workflows instead of being used as generic chat assistants.

One practical example is a QA Postman Expert: a custom ChatGPT assistant focused on API testing, Postman collections, test case generation, and debugging.


Why Build a QA Postman Expert?

API testing often includes repetitive tasks such as:

  • generating Postman test scripts
  • creating positive and negative API test cases
  • reviewing assertions
  • debugging failed tests
  • validating response schemas
  • checking authentication flows
  • preparing regression coverage

A specialized GPT can help standardize these tasks and reduce repeated manual effort.


GPTs vs Skills vs Agents

Before building the assistant, it helps to understand the differences between the main ChatGPT features.

GPTs

A custom GPT is the main expert assistant.

It can be configured with:

  • custom instructions
  • knowledge files
  • conversation starters
  • optional tools and actions

For this use case, the custom GPT becomes the QA Postman Expert.

Skills

Skills are reusable workflows or procedures.

Examples:

  • review a Postman collection
  • generate API regression tests
  • analyze a Swagger/OpenAPI file
  • check assertion quality

A useful mental model:

GPT = expert behavior
Skill = reusable workflow

Agents

Agents are execution-focused.

They can perform multi-step actions such as:

  • opening a website
  • navigating pages
  • testing UI flows
  • collecting results
  • creating a structured report

For example, an agent could open a demo website and perform a sanity test.


Availability Note

The exact availability of these features depends on the ChatGPT plan.

In general:

  • creating a custom GPT requires a paid ChatGPT plan
  • Skills are mainly available on workspace-oriented plans such as Business, Enterprise, Edu, Teachers, and Healthcare
  • Agent mode is available on selected paid plans, with limits depending on the subscription

Because availability can change, it is worth checking the current ChatGPT plan and workspace permissions before starting.


Creating the QA Postman Expert

To create the assistant:

Explore GPTs → Create

Then configure:

  • Name
  • Description
  • Instructions
  • Knowledge files
  • Conversation starters

Suggested name:

QA Postman Expert

Suggested description:

Helps QA engineers design, review, debug, and improve Postman API tests, collections,
assertions, environments, and regression testing workflows.

Example Instructions

The instructions define how the GPT behaves.

Example:

You are a senior QA API testing expert specialized in Postman.

Your responsibilities include:
- generating API test cases
- reviewing Postman collections
- writing Postman-compatible JavaScript
- improving assertion quality
- identifying weak validations
- generating positive, negative, boundary, auth, validation, and regression scenarios
- improving variable and environment handling

Prefer practical, copy-paste-ready outputs.
Use tables for test cases.
Use Postman-compatible JavaScript for scripts.
Be concise and focus on actionable QA improvements.

Recommended Knowledge Files

Knowledge files are what make the assistant useful in a real project.

Good files to upload include:

Postman Collections

{name}.postman_collection.json

These help the assistant understand endpoint structure, request chaining, variables, and existing assertions.
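
To show what the assistant actually reads, here is a minimal (hypothetical) collection skeleton in Postman's v2.1 format: request names, URLs with `{{baseUrl}}` variables, and embedded test scripts. The "Orders API" endpoint is invented for illustration.

```json
{
  "info": {
    "name": "Orders API",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "Create order",
      "request": {
        "method": "POST",
        "url": "{{baseUrl}}/orders",
        "body": { "mode": "raw", "raw": "{\"sku\": \"ABC-1\"}" }
      },
      "event": [
        {
          "listen": "test",
          "script": {
            "exec": ["pm.test('status is 201', () => pm.response.to.have.status(201));"]
          }
        }
      ]
    }
  ]
}
```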

Swagger / OpenAPI Specs

openapi.yaml
swagger.json

These help generate coverage and identify missing test scenarios.
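
Even a small spec fragment gives the assistant enough to propose missing scenarios. A hypothetical sketch:

```yaml
openapi: 3.0.3
info:
  title: Orders API        # hypothetical service
  version: "1.0"
paths:
  /orders/{id}:
    get:
      responses:
        "200":
          description: Order found
        "404":
          description: Unknown order id   # a negative case the GPT can target
```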

QA Playbook

QA_API_Playbook.md

Include standards such as:

  • naming conventions
  • assertion rules
  • negative testing strategy
  • response validation expectations
  • regression guidelines

Authentication Documentation

auth-flow.md
jwt-token-guide.md

This helps with token handling, authorization scenarios, and security-related testing.


Example Prompts

Once configured, the assistant can be used with prompts like:

  • Review this Postman collection and suggest improvements.
  • Generate positive, negative, boundary, and regression test cases for this endpoint.
  • Write Postman test scripts for this response.
  • Debug this failing Postman assertion.
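
To illustrate the third prompt, here is the kind of Postman-compatible script the assistant might return for a JSON login response. Inside Postman the `pm` object is provided by the sandbox; the small stub at the top only exists so the snippet can be read and run outside Postman, and the response shape (`token`) is hypothetical.

```javascript
// Minimal stand-in for Postman's sandbox `pm` object.
// Inside Postman, delete this stub: the real `pm` is already available.
const pm = {
  response: {
    code: 200,
    json: () => ({ token: "abc123" }), // hypothetical response body
    to: {
      have: {
        status(expected) {
          if (pm.response.code !== expected) {
            throw new Error(`expected status ${expected}, got ${pm.response.code}`);
          }
        },
      },
    },
  },
  expect(actual) {
    return {
      to: {
        be: {
          a(type) {
            if (typeof actual !== type) throw new Error(`expected a ${type}`);
          },
        },
      },
    };
  },
  test(name, fn) {
    fn();
    console.log("PASS:", name);
  },
};

// --- The part you would paste into the request's "Tests" tab ---
pm.test("status code is 200", () => {
  pm.response.to.have.status(200);
});

pm.test("response contains a string token", () => {
  const body = pm.response.json();
  pm.expect(body.token).to.be.a("string");
});
```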

Suggested QA AI Setup

A practical long-term setup could look like this:

QA Workspace

├── GPT: QA Postman Expert
├── GPT: FE Automation Expert
├── GPT: Test Case Generator

├── Skill: Postman Collection Reviewer
├── Skill: API Regression Generator

└── Agent: QA Sanity Test Runner

This separates:

  • expert behavior
  • reusable workflows
  • execution automation

Key Takeaways

A QA Postman Expert GPT is a practical way to support daily API testing work.

The most useful setup is:

  1. Create a focused custom GPT
  2. Add clear QA-specific instructions
  3. Upload real project knowledge
  4. Use Skills for repeatable workflows
  5. Use Agents later for action-based testing

The goal is not to replace QA engineers, but to reduce repetitive work, improve consistency, and make API testing workflows faster and more structured.

Wednesday, January 7, 2026

🤖 Supercharging Test Automation with Cursor: From Intent to a Production-Ready Playwright Framework

 Modern test automation often promises speed—but in reality, getting started usually means:

  • setting up Node and npm
  • installing Playwright
  • configuring TypeScript
  • wiring reporters, retries, CI
  • creating page objects
  • choosing stable locators
  • writing the first meaningful test suite

That’s hours of setup before you even validate a login form.

In this tutorial, I’ll show how Cursor, an AI-powered code editor, can generate a fully working Playwright test framework and test suite—including configuration, dependencies, locators, and scenarios—all out of the box.


🎯 What We’re Building

We’ll generate a complete Playwright Test (TypeScript) framework for this site:

👉 https://practicetestautomation.com/practice-test-login/

The final result includes:

  • ✅ Playwright + TypeScript setup
  • ✅ Page Object Model
  • ✅ Stable, role-based locators
  • ✅ Login test suite (positive & negative scenarios)
  • ✅ CI-ready configuration
  • ✅ GitHub Actions workflow
  • ✅ ESLint + Prettier
  • ✅ HTML reports, traces, screenshots

And yes—Cursor handles all of it.


🤖 Why Cursor Changes the Game for QA Automation

Traditional automation setup is repetitive and error-prone. Cursor flips the model:

With Cursor, you get:

  • 🚀 A production-ready framework generated instantly
  • 🧠 Context-aware test scenarios
  • 🎯 Correct locators chosen automatically
  • 🧱 Best practices applied by default
  • 🔁 Zero copy-paste from old projects

Instead of writing boilerplate, you focus on test intent and coverage.


🧠 Step 1: Describe What You Want (That’s It)

In Cursor, you don’t start by creating folders or config files.

You start by describing the outcome.

Here’s the exact prompt you can paste into Cursor 👇

You are a senior QA automation engineer. Create a Playwright Test framework in TypeScript that runs out-of-the-box and includes a complete, best-practice test suite for:

https://practicetestautomation.com/practice-test-login/


Hard requirements:

- Use Playwright Test + TypeScript.

- Provide a complete folder structure and all necessary files.

- Everything must work out-of-the-box after:

  1) npm install

  2) npx playwright install --with-deps

  3) npm test

- Use best practices: page objects, stable locators, fixtures, test isolation, clean assertions, retries in CI, trace/video/screenshots on failure, reporters.

- Do NOT use brittle CSS selectors. Prefer getByRole / getByLabel / getByText and data-testid if available.

- Make baseURL configurable via env (dotenv) and default to the target site.

- Add useful npm scripts (test, test:ui, test:headed, test:debug, lint, format).

- Include ESLint + Prettier config for TypeScript.

- Include a GitHub Actions workflow to run tests on push/PR with HTML report artifact upload.

- Add a README with exact commands and explanations.


Framework details to implement:

1) Project setup

- package.json with dependencies: @playwright/test, typescript, eslint, prettier, eslint-config-prettier, eslint-plugin-playwright, @typescript-eslint/*, dotenv

- tsconfig.json with sensible defaults.


2) Playwright configuration

- playwright.config.ts should:

  - set baseURL from process.env.BASE_URL with fallback to "https://practicetestautomation.com"

  - use testDir = "tests"

  - use expect timeout and action timeout reasonable defaults

  - configure projects: chromium, firefox, webkit

  - enable screenshot: only-on-failure, video: retain-on-failure, trace: retain-on-failure

  - set retries: 0 locally, 2 on CI (process.env.CI)

  - set reporter: list + html

  - forbidOnly on CI

  - fullyParallel true


3) Test architecture

- /pages:

  - LoginPage.ts

  - SecureAreaPage.ts

- /tests:

  - login.spec.ts covering valid login, invalid credentials, empty fields, logout

- /test-data:

  - users.ts with valid creds:

    - username: "student"

    - password: "Password123"

- /fixtures:

  - baseTest.ts exposing page objects


4) Robustness

- Web-first assertions

- Proper navigation handling

- Independent tests


Output:

- Print all files with full content

- Include a "How to run" section
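
For reference, the `playwright.config.ts` that comes back from this prompt usually lands close to the sketch below (exact values can differ from run to run, so treat it as an illustration rather than the generated file):

```typescript
import { defineConfig, devices } from "@playwright/test";
import "dotenv/config";

export default defineConfig({
  testDir: "tests",
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  reporter: [["list"], ["html"]],
  use: {
    baseURL: process.env.BASE_URL ?? "https://practicetestautomation.com",
    screenshot: "only-on-failure",
    video: "retain-on-failure",
    trace: "retain-on-failure",
  },
  projects: [
    { name: "chromium", use: { ...devices["Desktop Chrome"] } },
    { name: "firefox", use: { ...devices["Desktop Firefox"] } },
    { name: "webkit", use: { ...devices["Desktop Safari"] } },
  ],
});
```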

⏱️ What Cursor Does for You Automatically

After submitting the prompt, Cursor:

🧱 Builds the entire framework

  • Creates package.json, tsconfig.json, playwright.config.ts
  • Installs Playwright Test correctly
  • Configures retries, reporters, artifacts

🧭 Understands the application

  • Identifies the login form
  • Picks role- and label-based locators
  • Detects success and error states

🧪 Generates meaningful test scenarios

  • Valid login
  • Invalid username
  • Invalid password
  • Empty credentials
  • Logout flow

🧼 Applies best practices automatically

  • Page Object Model
  • Test isolation
  • Web-first assertions
  • CI-friendly setup

No tutorials. No StackOverflow. No guesswork.
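
To make "Page Object Model" concrete, here is a rough sketch of the kind of LoginPage the generated framework contains. In the real project the `Page` type comes from `@playwright/test`; the two tiny interfaces below are stand-ins so the sketch stays self-contained, and the selector names are illustrative rather than copied from generated code.

```typescript
// Stand-ins for Playwright's types; the real framework imports these
// from "@playwright/test" instead of declaring them.
interface Locator {
  fill(value: string): Promise<void>;
  click(): Promise<void>;
}

interface Page {
  goto(path: string): Promise<void>;
  getByLabel(label: string): Locator;
  getByRole(role: string, options: { name: string }): Locator;
}

// Page object: one class per screen, exposing intent-level actions
// instead of raw selectors scattered through the tests.
class LoginPage {
  constructor(private readonly page: Page) {}

  async open(): Promise<void> {
    await this.page.goto("/practice-test-login/");
  }

  async login(username: string, password: string): Promise<void> {
    await this.page.getByLabel("Username").fill(username);
    await this.page.getByLabel("Password").fill(password);
    await this.page.getByRole("button", { name: "Submit" }).click();
  }
}
```

A spec then reads as intent: open the page, log in, assert on the outcome, with no raw selectors in the test body.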



🎥 Live Cursor Demo



🏃 Step 2: Run the Tests (That’s All)

Once Cursor finishes, you can simply run:

npm install
npx playwright install --with-deps
npm test

Or ask Cursor to run it for you, as shown in the recording.

You immediately get:

  • ✅ Browser launch
  • ✅ Login tests executed
  • ✅ HTML report generated
  • ✅ Screenshots/traces on failure


🔥 Why This Matters for QA Teams

This isn’t just faster test writing.

It fundamentally changes how QA automation scales.

Traditional approach

  • Days of setup
  • Inconsistent patterns
  • Copy-paste frameworks
  • Tribal knowledge

Cursor + AI approach

  • Minutes to bootstrap
  • Consistent architecture
  • Senior-level patterns by default
  • QA focuses on risk, not boilerplate


🚀 Bonus Benefits You Might Not Expect

  • 🧠 Cursor adapts to your existing codebase
  • 🔁 Refactors tests as the app evolves
  • 🔍 Suggests missing scenarios
  • 🧪 Helps convert manual tests into automation
  • 📈 Improves onboarding for new QA engineers


🧩 Final Thoughts

Cursor doesn’t replace QA engineers.

It amplifies them.

Instead of spending energy on setup and syntax, you spend it on:

  • coverage
  • risk
  • quality
  • confidence

If you haven’t tried Cursor for test automation yet—this is your sign.



Monday, December 22, 2025

Creating a Fully Running Test Automation Framework in Under 3 Minutes with Cursor

In this tutorial, you’ll generate a clean, production-ready web UI automation framework in under 3 minutes using Cursor—and you’ll validate a real-world scenario: a failed login on SauceDemo for a locked-out user.

I’ll also embed my screen recording in the post so you can follow the exact flow I used end-to-end.


What you’ll build

A working Gradle project using:

  • Java 17
  • JUnit 5
  • Selenide
  • Gradle (Groovy DSL)

With best-practice structure:

  • Page Object Model (POM)
  • Clear separation:
    • test logic
    • page interactions
    • configuration
  • External configuration:
    • base URL
    • credentials
  • A real test:
    • open https://www.saucedemo.com
    • login as locked_out_user
  • Verify the error message:

      Epic sadface: Sorry, this user has been locked out.

Step 1: Create an empty folder + open it in Cursor

1. Create a new folder, e.g. saucedemo-selenide-junit5

2. Open it in Cursor

3. Make sure Cursor can create files in the project (normal default)

That’s all—no manual Gradle init needed if you’re letting Cursor generate everything.


Step 2: Paste this prompt into Cursor

This is the exact prompt the framework is based on (copy/paste as-is):

You are a Senior QA Automation Engineer.

Create a clean, production-ready web UI test automation framework using:

- Java 17

- JUnit 5

- Selenide

- Gradle (Groovy DSL)


Architecture & Best Practices:

- Follow Page Object Model (POM).

Separate:

- Test logic

- Page interactions

- Configuration


Use Selenide best practices:

- No explicit waits

- Centralized browser configuration

- Stable, readable locators

- Configuration must be externalized (URL, credentials).

- Code must be readable, maintainable, and scalable.


Project Structure:

src

└── test

    ├── java

    │   ├── base

    │   │   └── BaseTest.java

    │   ├── config

    │   │   └── TestConfig.java

    │   ├── pages

    │   │   └── LoginPage.java

    │   └── tests

    │       └── LoginTest.java

    └── resources

        └── application.properties


Functional Requirements:

1. Open https://www.saucedemo.com

2. Try to login using:

Username: locked_out_user

Password: secret_sauce

3. Verify that login fails with error message

"Epic sadface: Sorry, this user has been locked out."


What to Generate:

- build.gradle with all required dependencies

- Browser and timeout configuration

- Base test setup using JUnit 5 lifecycle hooks

- LoginPage Page Object with actions and assertions

- LoginTest using JUnit 5 and Selenide assertions

- application.properties for configuration


Execution:

The project must be runnable with:

./gradlew test


Generate complete, runnable code for all files.

Why this prompt works

It forces Cursor to:

  • generate all files, not fragments
  • follow a known structure
  • implement a real assertion
  • externalize config (so you don’t hardcode URLs and creds into tests)


Step 3: Make sure Cursor generates these files

After Cursor responds, your repository should include at least:

  • build.gradle
  • src/test/java/base/BaseTest.java
  • src/test/java/config/TestConfig.java
  • src/test/java/pages/LoginPage.java
  • src/test/java/tests/LoginTest.java
  • src/test/resources/application.properties

If you’re missing any of these, ask Cursor:

“Create the missing files and ensure the project runs with ./gradlew test.”


Step 4: What the code should do (high-level)

TestConfig.java

Central place to configure Selenide:

  • base URL
  • browser (optional)
  • timeout
  • screenshot / reports folder (optional)
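
For example, TestConfig might read values like these from application.properties. The key names here are illustrative; use whatever names Cursor actually generated:

```properties
# Externalized test configuration (illustrative key names)
base.url=https://www.saucedemo.com
browser=chrome
timeout.ms=4000
user.locked=locked_out_user
user.password=secret_sauce
```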

BaseTest.java

JUnit 5 lifecycle hooks:

  • set up config once (or before each test)
  • optionally clean browser state

LoginPage.java (Page Object)

Encapsulates:

  • locators (username, password, login button, error message)
  • actions (open page, login)
  • assertions (error visible + exact text)

LoginTest.java

Reads like a scenario:

  • open login page
  • attempt login as locked out user
  • assert correct error is shown


Step 5: Run it

From the project root:

./gradlew test

You should see:

  • Gradle downloads dependencies
  • Selenide launches a browser
  • test runs
  • test passes ✅ (because the expected error is displayed)


Expected behavior on SauceDemo

The locked_out_user is a known user on SauceDemo that always fails login with the message:

Epic sadface: Sorry, this user has been locked out.

Your test should assert the exact message, not something vague—this is important for stable UI test feedback.


Final Thoughts

What you’ve seen here isn’t magic — it’s leverage.

Cursor didn’t “replace” test automation skills. It amplified them.
The reason this worked in under 3 minutes is that the intent, structure, and expectations were clear from the start.

The real takeaway is this:

  • If you know what good architecture looks like
  • If you can describe clean separation of concerns
  • If you understand how tests should read and behave

…then tools like Cursor become a serious productivity multiplier rather than a code generator you have to babysit.

This small example already gives you:

  • a maintainable project structure
  • a real negative test with a meaningful assertion
  • a foundation you can safely extend in a real project

From here, scaling is easy:

  • add more page objects
  • add reporting
  • add CI
  • add parallelism

The hardest part — getting started correctly — is already done.

If this tutorial helped you, try repeating the exercise with:

  • a positive login flow
  • a different browser
  • a new assertion
  • or an entirely different application

You’ll quickly notice that the speed gain compounds.

The future of test automation isn’t writing less code — it’s spending less time on the wrong code.