Complete API Validation Guide: What to Actually Test in Postman

What I’m Actually Validating in Postman (Now That I Know Better)

The first time I got a 200 OK response in Postman, I literally pumped my fist in the air. “It works!” I announced to my empty home office (and slightly confused dog). I felt like a tech genius who had just hacked the mainframe. But I’ve since learned that proper API validation involves so much more than status codes.

Fast forward three months, and I’m cringing at that memory. Why? Because I’ve realized that celebrating a status code is like getting excited that a restaurant door opened for you – it says absolutely nothing about what’s being served inside.

My API testing journey has evolved dramatically since those early days, and I want to share what I’ve learned about what actually matters when testing APIs. Let’s walk through what I used to do, what I validate now, and the moment everything changed for me.

What I Used to Do: The API Testing Beginner Phase

When I first started with Postman, my “testing” process looked something like this:

  1. Send the request
  2. Look for 200 OK or 201 Created
  3. See some JSON in the response body
  4. Mark test as “passed”
  5. Move on to the next endpoint

In my defense, this approach wasn’t completely pointless. It confirmed basic connectivity and that the endpoint existed. But calling this “testing” is like saying you’ve “cleaned the house” when all you did was close the bedroom door so no one can see the mess.

My Learning Moment: During my early API testing days, I was focusing on the mechanics of sending requests rather than validating what came back. I was so caught up in the “how” of API testing that I overlooked the “what” and “why.”

The Moment It Clicked: When 200 OK Was Anything But OK

The turning point came during my testing of a user management API. I sent a GET request to fetch user data, and got my beloved 200 OK response. Test passed! Or so I thought.

When I actually looked at the response body, here’s what I found:

{
  "users": [
    {
      "id": 1,
      "name": null,
      "email": "",
      "created_at": "2023-14-01T25:61:00Z"
    }
  ]
}

The endpoint was returning:

  • A null value for a required field
  • An empty string for email (which shouldn’t be possible)
  • An impossible date format that would break any frontend parsing

Yet Postman’s cheerful 200 OK was telling me everything was fine. This was my “aha” moment – the API was technically responding, but the response was garbage. That’s when I realized status codes are just the beginning of API validation.
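
Looking back, a handful of response-body checks would have caught every one of those problems. Here's a plain-JavaScript sketch of what I mean (the field names mirror the response above; inside Postman you'd wrap each check in `pm.test`):

```javascript
// Collect every problem with a user object instead of stopping at the
// first failed check.
function findUserProblems(user) {
  const problems = [];
  if (user.name === null || user.name === undefined) {
    problems.push("name is missing");
  }
  if (typeof user.email !== "string" || user.email.trim() === "") {
    problems.push("email is empty");
  }
  // Date.parse rejects impossible timestamps like month 14 or hour 25
  if (Number.isNaN(Date.parse(user.created_at))) {
    problems.push("created_at is not a valid timestamp");
  }
  return problems;
}

// The "successful" response above fails all three checks:
const badUser = { id: 1, name: null, email: "", created_at: "2023-14-01T25:61:00Z" };
console.log(findUserProblems(badUser)); // lists all three problems
```

Returning a list instead of asserting one thing at a time means a single test run surfaces every broken field at once.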

Career Changeup Tip: Coming from education systems operations, I relate API validation to checking student information systems. You wouldn’t just verify a student record exists – you need to make sure all their registration data, course enrollments, and academic history are accurate and properly formatted. Apply this same systematic verification approach to your API responses.

The API Validation Checklist I Use Now

Today, my API testing is much more thorough. Here’s my actual validation checklist that I apply to each endpoint:

1. Status Code Validation

Yes, I still check status codes, but with more nuance:

  • 200-299: Success codes – but which specific one? (200 vs 201 vs 204 matters!)
  • 400-499: Client errors – is it the correct error for the situation?
  • 500-599: Server errors – should never happen in normal testing unless I’m stress testing

2. Response Schema Validation

Does the response follow the expected structure?

  • All expected fields are present
  • No unexpected fields appear
  • Nested objects and arrays have correct structure
  • Data types match expectations (strings, integers, booleans)

Here’s a snippet from one of my Postman tests that checks schema structure:

pm.test("Response has correct user schema", function() {
    const responseJson = pm.response.json();
    
    pm.expect(responseJson).to.have.property('users');
    pm.expect(responseJson.users).to.be.an('array');
    
    if (responseJson.users.length > 0) {
        pm.expect(responseJson.users[0]).to.have.property('id');
        pm.expect(responseJson.users[0]).to.have.property('name');
        pm.expect(responseJson.users[0]).to.have.property('email');
        pm.expect(responseJson.users[0]).to.have.property('created_at');
    }
});
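
Hand-writing one `pm.expect` per field gets tedious for larger payloads. Newer Postman versions can validate a whole response against a JSON Schema with `pm.response.to.have.jsonSchema(schema)`; the core idea looks like this in plain JavaScript (the field list is assumed from the user object above):

```javascript
// Minimal structural check: every expected field present with the
// expected JavaScript type, and nothing extra. A hand-rolled stand-in
// for a full JSON Schema validator.
const expectedTypes = { id: "number", name: "string", email: "string", created_at: "string" };

function matchesUserSchema(user) {
  const expected = Object.keys(expectedTypes).sort().join(",");
  const actual = Object.keys(user).sort().join(",");
  if (expected !== actual) return false; // missing or unexpected fields
  return Object.entries(expectedTypes).every(
    ([field, type]) => typeof user[field] === type
  );
}

console.log(matchesUserSchema({ id: 1, name: "Ada", email: "ada@example.com", created_at: "2023-01-14T12:00:00Z" })); // true
console.log(matchesUserSchema({ id: "1", name: "Ada" })); // false: wrong type, missing fields
```

A nice side effect: this also catches the null name from earlier, because `typeof null` is `"object"`, not `"string"`.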

3. Data Validation

Are the actual values meaningful and correct?

  • Required fields have non-null, non-empty values
  • Dates and times are in proper format and reasonable ranges
  • Email addresses follow correct format
  • Numerical values are within expected ranges
  • IDs are unique
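
Most of these value-level checks fit into one reusable helper. Here's a plain-JavaScript sketch (the email pattern is a rough shape check, not a full RFC 5322 validator):

```javascript
// Value-level checks that a schema test would miss: the types can all
// be right while the values are still nonsense.
function findDataProblems(users) {
  const problems = [];
  const seenIds = new Set();
  const emailShape = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // rough shape check only

  for (const user of users) {
    if (seenIds.has(user.id)) problems.push(`duplicate id ${user.id}`);
    seenIds.add(user.id);
    if (!user.name) problems.push(`empty name for id ${user.id}`);
    if (!emailShape.test(user.email)) problems.push(`bad email for id ${user.id}`);
    const created = Date.parse(user.created_at);
    if (Number.isNaN(created) || created > Date.now()) {
      problems.push(`unreasonable created_at for id ${user.id}`);
    }
  }
  return problems;
}

const sample = [
  { id: 1, name: "Ada", email: "ada@example.com", created_at: "2023-01-14T12:00:00Z" },
  { id: 1, name: "Bob", email: "not-an-email", created_at: "2999-01-01T00:00:00Z" },
];
console.log(findDataProblems(sample)); // three problems, all from the second record
```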

My Learning Moment: I once missed a bug where an endpoint returned negative values for quantities that should have been positive. The schema was correct (they were numbers), but the values didn’t make business sense. Now I always consider the business logic, not just the data type.

4. Authentication Behavior

How does the API handle auth scenarios?

  • Does it return proper tokens when credentials are valid?
  • Does it correctly reject invalid credentials?
  • Do expired tokens get refused?
  • Are protected endpoints actually protected?
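
I cover these by duplicating a request, deliberately breaking the credential, and asserting on the rejection. One way to keep the scenarios and expected codes in one place is a small table (the scenarios and codes below are assumptions about a typical token-based API, not from any one spec):

```javascript
// Expected status per auth scenario. In a Postman collection each
// scenario is its own request; the test script compares
// pm.response.code against the expected value, e.g.:
//   pm.test(scenario, () => pm.expect(pm.response.code).to.eql(expected));
const authExpectations = [
  { scenario: "valid credentials",                expected: 200 },
  { scenario: "invalid credentials",              expected: 401 },
  { scenario: "expired token",                    expected: 401 },
  { scenario: "no token on a protected endpoint", expected: 401 },
];

function expectedStatusFor(scenario) {
  const row = authExpectations.find((r) => r.scenario === scenario);
  return row ? row.expected : undefined;
}

console.log(expectedStatusFor("expired token")); // 401
```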

5. Edge Case Handling

How does the API behave in non-standard situations?

  • What happens with missing parameters?
  • How does it handle extremely large or small values?
  • What about special characters or emoji in text fields?
  • How does pagination work with zero results or very large result sets?

Here’s a real example where edge case testing found an issue. The API returned 200 OK when searching for a non-existent user, but returned an empty array rather than an appropriate error:

// Request: GET /api/users?name=ThisUserDoesNotExist
// Response: 200 OK
{
  "users": [],
  "total": 0
}

This isn’t technically wrong, but it makes error handling harder for frontend developers. A 404 Not Found would have been more appropriate and easier to work with.
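
Whichever convention a team settles on, it's worth pinning down with a test so a silent change gets caught. For the empty-array convention shown above, a plain-JavaScript sketch of the check might be:

```javascript
// For the "200 with an empty array" convention, assert the response is
// explicitly empty and internally consistent.
function isWellFormedEmptyResult(body) {
  return Array.isArray(body.users)
    && body.users.length === 0
    && body.total === 0; // the count must agree with the array
}

console.log(isWellFormedEmptyResult({ users: [], total: 0 })); // true
console.log(isWellFormedEmptyResult({ users: [], total: 5 })); // false: count disagrees
```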

Tech Toolkit of the Week: Postman Test Scripts

The real power of Postman comes from its testing capabilities. I write test scripts for all my API validations using Postman’s built-in testing framework. Here’s a simple example:

// Test for successful user creation
pm.test("Status code is 201 Created", function() {
    pm.response.to.have.status(201);
});

// Test that response contains the created user with correct data
pm.test("Response contains created user with valid data", function() {
    const responseJson = pm.response.json();
    
    pm.expect(responseJson).to.have.property('id');
    pm.expect(responseJson.id).to.be.a('number');
    
    pm.expect(responseJson).to.have.property('email');
    // Loose shape check; {2,} keeps longer TLDs like .museum from failing
    pm.expect(responseJson.email).to.match(/^[\w.-]+@([\w-]+\.)+[\w-]{2,}$/);
    
    // Store the created user ID for use in future requests
    pm.environment.set("createdUserId", responseJson.id);
});

These tests run automatically when the request completes, giving me an immediate pass/fail status for each validation check.

What I Actually Log and Flag

When I find issues during API testing, here’s what I include in my reports:

  1. Endpoint details: The exact URL, HTTP method, and any parameters
  2. Request details: Headers, body data, and authentication used
  3. Expected behavior: What should have happened according to requirements
  4. Actual behavior: The response received, with relevant parts highlighted
  5. Severity assessment: How impactful this issue would be in production
  6. Reproducibility steps: Clear instructions so developers can recreate the issue

Here’s an excerpt from an actual bug report I created:

Bug: User creation API accepts invalid email formats
Endpoint: POST /api/users
Severity: Medium
Steps to reproduce:
1. Send POST request to /api/users
2. Include body: {"name": "Test User", "email": "not-an-email"}
3. Observe 201 Created response

Expected: API should return 400 Bad Request due to invalid email format
Actual: API accepts invalid email and returns 201 Created
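
To keep a bug like this from quietly coming back, I pair the report with a negative test: resend the same invalid body and assert on the outcome. A plain-JavaScript sketch of the pass/fail rule (the status codes are from the report; the helper name is mine):

```javascript
// Given the status the API returned for the invalid-email request,
// decide whether the fix is in place. Per the report: 400 = fixed,
// 201 = the bug is back, anything else = investigate.
function invalidEmailOutcome(statusCode) {
  if (statusCode === 400) return "fixed";
  if (statusCode === 201) return "regressed";
  return "unexpected";
}

// Inside the Postman request that posts {"name": "Test User", "email": "not-an-email"}:
//   pm.test("invalid email is rejected", () =>
//     pm.expect(invalidEmailOutcome(pm.response.code)).to.eql("fixed"));

console.log(invalidEmailOutcome(201)); // "regressed": the behavior in the report
```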

Side Hustle Strategy: Keep a personal “bug journal” of API issues you find during your learning projects. These make excellent interview talking points and show you understand the difference between functioning code and production-ready APIs.

The Real Lesson: Testing What Matters, Not Just What’s Easy

The evolution of my API testing approach has taught me a valuable lesson: thorough testing isn’t about doing more testing—it’s about testing what matters.

Status codes are just the door to the API house. You have to go inside, check all the rooms, test if the appliances work, and make sure the foundation is solid.

This approach takes more time, but it’s the difference between surface-level testing and the kind of quality assurance that prevents production issues and earns respect from development teams.

My Journey Continues

I’m still learning and improving my API testing skills every day. What I’ve shared here is where I am now, not the end point. I expect to look back at this post in a year and find new ways I’ve evolved my approach.

That’s what I love about the QA journey – there’s always something new to learn, another edge case to consider, or a better way to validate.

Ask a Tester: Community Q&A

Q: How do you balance thorough API testing with time constraints?

A: I prioritize based on risk and impact. Critical user flows (like authentication, payments, or data creation) get full validation. For lower-risk endpoints, I might focus on schema validation and a few key data checks. It’s about making informed decisions about test coverage rather than treating all endpoints equally.


What about you? What did you overlook when you first started API testing? Have you had any “aha” moments that changed your approach? Share your experiences in the comments!

#APITesting #Postman #TestLikeAGirl #QAJourney #TechDeepDive

Join our community! Sign up for the weekly TestLikeAGirl newsletter for exclusive transition tips, job opportunities, and virtual coffee chats with women who’ve successfully made the leap into tech.
