Module 7 Notes

The document outlines the importance of API testing, detailing various types of tests such as unit, integration, and end-to-end tests, as well as the benefits of mocking machine learning models to enhance testing efficiency. It also discusses common API errors like 401 Unauthorized and 404 Not Found, providing debugging techniques and tips for resolving these issues. Additionally, it emphasizes the use of structured logging and exception handling in FastAPI to facilitate debugging.


Topics
Monday, June 9, 2025 9:46 PM

1. Importance of Testing APIs

2. Types of Tests

3. Mock ML Models

4. Common API Errors

5. Debugging Techniques

7. Testing and Debugging Page 93


1. Importance of Testing APIs
Monday, June 9, 2025 9:53 PM

• Testing is a non-negotiable aspect of modern API development


• It ensures that the web application functions correctly, is reliable, and behaves consistently even as you scale the application

1. Correctness of Application Logic:


○ Verifies that the API behaves as expected under different conditions
○ Reduces bugs in production and catches logical errors early
○ Ensures ML models make correct predictions and sensitive endpoints do not expose data due to logic errors

2. Protection Against Regressions:


○ Regressions are bugs introduced into previously working code when new changes are made
○ Helps maintain backward compatibility and stability of endpoints
○ Use regression test suites to validate critical user flows like authentication, payments, data submissions, etc.

3. Safety Net During Refactoring:


○ Helps improve code readability, performance, and maintainability
○ Automated test cases verify that the refactored code still behaves the same way as before
○ Run the test suite after every major code change or cleanup

4. Enables Continuous Integration Pipelines:


○ Tests act as a gatekeeper to production; code won’t be merged unless all tests pass
○ Ensures stable deployments, boosts team confidence in merging PRs and reduces manual testing effort



2. Types of Tests
Monday, June 9, 2025 10:08 PM

• Testing at different layers is essential for ensuring both small units and the system as a whole work correctly
• Each type of test has a distinct purpose, scope, and cost to run

1. Unit Tests:
○ Focuses on a single unit of logic—typically a function or method
○ Should not depend on external services like a database, file system, or network
○ Fast to run and ideal for TDD (Test-Driven Development)
○ When to use:
▪ Testing utility functions
▪ Business logic (e.g., tax calculation, discount rules)
▪ Schema validation (e.g., Pydantic model validators)

○ How pytest works:


▪ Use the command pytest tests/
▪ Pytest scans the tests/ folder for any Python file whose name starts with test_ or ends with _test.py
▪ Inside those files, it looks for functions that start with test_
▪ After all tests are run, pytest summarizes the results
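
For example, a minimal test file that pytest would discover and run. The calculate_discount helper is hypothetical, invented here for illustration; it is not part of the module code:

```python
import pytest

def calculate_discount(price: float, percent: float) -> float:
    """Hypothetical business-logic helper under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# pytest collects every function whose name starts with test_
def test_calculate_discount_applies_percentage():
    assert calculate_discount(200.0, 25) == 150.0

def test_calculate_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150)
```

Running pytest tests/ would collect both functions and report a pass/fail summary.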

2. Integration Tests:
○ Involves multiple units working together: database + app, API + ML model, etc.
○ Typically uses in-memory or test database environments
○ More realistic than Unit Tests, but also slower
○ When to use:
▪ Testing API endpoints
▪ Testing FastAPI dependencies (like database or ML models)
▪ Validating middleware behavior
▪ Ensuring JWT authentication works across endpoints

3. End-to-End (E2E) Tests:


○ Simulates real user flows like login, submitting a form, or making a prediction
○ Runs the system as a black box (i.e., without knowing internal code)
○ May interact with UI, frontend, or API
○ When to use:
▪ Pre-deployment checks
▪ Validating critical user workflows like sign-up/login/purchase
▪ Testing deployment setup and network configurations

Summary:
• Unit tests: a single function or method in isolation; no external services; fastest to run
• Integration tests: multiple units working together (e.g., API + database or ML model); slower but more realistic
• E2E tests: the whole system as a black box; slowest; reserved for critical user workflows and pre-deployment checks
3. Mock ML Models
Monday, June 9, 2025 11:14 PM

What is meant by Mocking ML Models?

• Mocking an ML model means replacing the actual ML model with a fake (or simulated) object in your tests that behaves like the real model but doesn't perform any actual computation
• Instead of loading a large model file and calling its predict() method, you pretend (mock) that a model exists and will return a known, controlled value

Why Mock ML Models?

1. ML Models Are Heavy to Load:


○ In production-grade ML APIs, models may be trained on large datasets and saved as serialized .pkl, .joblib, or .h5 files
○ These files can be hundreds of MBs or even GBs in size and may contain complex architectures (e.g., deep learning models)
○ Deserializing and initializing these models can take several seconds or even minutes
○ Running this load operation for every test case makes the test suite slow, resource-intensive, and prone to timeouts or memory exhaustion, which is not ideal for a CI/CD pipeline

2. Tests Should Focus on API Logic:


○ When testing an API endpoint (e.g., /predict), the goal is to check whether the endpoint accepts input and returns a response in the correct format
○ We do not care whether the prediction itself is accurate
○ Hence, using a real model introduces unnecessary complexity and violates unit testing principles

3. Mocking Makes Tests Fast and Deterministic:


○ Mocking replaces the actual model object with a fake object that simulates the predict method
○ This fake object can return predefined outputs for known inputs, simulate exceptions to test error handling, and avoid real computation, making the test lightweight and fast
○ As a result, the test becomes deterministic; the same input always yields the same response, which helps maintain stability in test results

Summary of Mocking in ML API Testing:
• Mocks replace heavy model loading with a lightweight fake object
• Tests stay focused on API logic rather than prediction accuracy
• Results are fast and deterministic, which suits CI/CD pipelines
3.1 - Demo
Monday, June 9, 2025 11:37 PM

File Structure:
- main.py
- model.py
- test_main.py
- training.ipynb

Control Flow:
○ Uses the patch() function from unittest.mock to mock the real ML model’s .predict() method
○ The mocked method always returns [99]
○ Sends a POST request to the /predict endpoint using TestClient
○ Asserts that the endpoint returns a 200 OK with a prediction of 99
○ Ensures that the mocked .predict() was called with the expected input



4. Common API Errors
Tuesday, June 10, 2025 10:13 PM

1. 401 Unauthorized:

• Meaning: Authentication has failed — the server didn’t get a valid token or credentials

• Common Causes:
○ Missing or expired authentication token
○ Wrong API key or credentials
○ Bearer token not included or formatted incorrectly

• Debugging Tips:
○ Confirm if the token is valid and not expired
○ Ensure the Authorization header is set properly
○ Double-check API key/token permissions

2. 404 Not Found:

• Meaning: The URL or resource requested does not exist on the server

• Common Causes:
○ Typo in endpoint URL
○ The route isn't defined on the backend
○ Missing trailing slash (for some frameworks like Django REST Framework)

• Debugging Tips:
○ Recheck the API path spelling and method (GET/POST/etc.)
○ Confirm if the endpoint exists in the backend routing logic
○ Use API documentation or tools like Swagger/OpenAPI to test routes

3. 422 Unprocessable Entity:

• Meaning: The server understands the request, but it can't process the data because it doesn't match the expected format or required fields are missing

• Common Causes:
○ Missing required fields in the request body
○ Wrong data types (e.g., sending a string instead of an integer)
○ Failing validation checks (e.g., string too short, email not valid)

• Debugging Tips:
○ Check the API documentation/schema carefully
○ Validate the payload locally using a schema validator (ex. Pydantic)
○ Use tools like Postman or curl to test with valid input
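
A quick way to reproduce a 422-style failure locally with Pydantic; the schema and field names below are hypothetical:

```python
from pydantic import BaseModel, ValidationError

class PredictionRequest(BaseModel):
    # Hypothetical schema for a /predict request body
    user_id: int
    features: list[float]

# A wrong data type triggers the same validation failure that
# FastAPI reports to the client as a 422 Unprocessable Entity
try:
    PredictionRequest(user_id="not-a-number", features=[1.0, 2.0])
except ValidationError as exc:
    print(f"{len(exc.errors())} validation error(s)")
```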

4. 500 Internal Server Error:

• Meaning: Something went wrong on the server side while processing the request

• Common Causes:
○ Unhandled exception in backend code (e.g., division by zero, missing ML model)
○ Database connection issues
○ Misconfiguration in server code or environment

• Debugging Tips:
○ Check server logs for traceback or error stack
○ Add proper exception handling (try-except blocks)
○ Ensure all dependencies and models are loaded before request handling
○ Use logging to catch unexpected exceptions



5. Debugging Techniques
Tuesday, June 10, 2025 10:30 PM

FastAPI makes Debugging easier through:

• Structured Logging

• Exception Handling

• API testing tools like Postman/curl

• Development Mode Configurations



5.1 - Logging
Tuesday, June 10, 2025 10:35 PM

• Logging is a fundamental debugging practice in development


• Instead of using print(), we use Python’s logging module, which is more powerful and configurable
• Logging helps you:
○ Track which parts of the app were accessed
○ Record variable values and errors
○ Retain logs in files or forward them to monitoring systems

Log Levels:
• DEBUG: Low-level system information
• INFO: Routine information like successful requests
• WARNING: Something unexpected happened
• ERROR: A serious problem
• CRITICAL: Severe errors that cause premature termination
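
A minimal sketch of the logging module in use, with a hypothetical model-loading helper that logs at several of the levels above:

```python
import logging

# Configure logging once at application startup; in a FastAPI project
# this typically lives in main.py before the app is created
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
logger = logging.getLogger("app.predict")

def load_model(path: str):
    """Hypothetical loader that logs at the appropriate levels."""
    logger.debug("attempting to load model from %s", path)  # low-level detail
    try:
        with open(path, "rb") as f:
            data = f.read()
        logger.info("model loaded (%d bytes)", len(data))   # routine info
        return data
    except FileNotFoundError:
        logger.error("model file missing: %s", path)        # serious problem
        return None
```

Unlike print(), the logger records a timestamp, the source module, and the severity, and the output can later be redirected to files or monitoring systems without touching the call sites.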



5.2 - Exception Handling
Tuesday, June 10, 2025 10:35 PM

FastAPI lets developers define custom exception handlers to:

• Catch unhandled errors


• Return consistent and informative error messages
• Log errors for debugging and auditing



5.3 - curl
Tuesday, June 10, 2025 10:35 PM

• Send Requests with headers, payload, methods (GET, POST, PUT, DELETE)
• Inspect responses (status codes, headers, content)
• Helpful in isolating whether issues are in the backend or client

Syntax:

• -X POST: Specifies the HTTP method


• -H: Adds headers
• -d: Sends request body data
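
Putting the flags together, a request to a hypothetical /predict endpoint might look like this (the URL, token, and payload are placeholders):

```
curl -X POST "http://localhost:8000/predict" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-token>" \
  -d '{"features": [1.0, 2.0]}'
```

Adding -i prints the response status code and headers along with the body, which helps when diagnosing errors like 401 or 422.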



5.4 - Configurations
Tuesday, June 10, 2025 10:35 PM

Running Uvicorn with --reload and a debug log level enables live reloading and better error visibility during development:

○ --reload: Automatically restarts the server when you change code (development only)
○ --log-level debug: Enables verbose output and stack traces in logs (not for production); older Uvicorn releases exposed a --debug flag for this, which has since been removed

Syntax:
uvicorn main:app --reload --log-level debug



5.5 - Summary
Tuesday, June 10, 2025 11:16 PM

