Commit f48a4cb (1 parent: d608be8)

1. Modify the test cases for llmperf based on actual usage
2. Update the instruction document for the testing framework

4 files changed: 409 additions & 373 deletions

File changed: test/README.md (172 additions & 115 deletions)

The updated test/README.md reads as follows:
# Pytest Automation Testing Framework Guide

[English](README.md) | [简体中文](README_zh.md)

This guide introduces an automation testing framework built on **pytest 7.0+**, integrating core capabilities such as **configuration management**, **database integration**, **performance testing**, and **HTML report generation**. It is suitable for unit testing, functional testing, and end-to-end (E2E) testing scenarios.

---

## 📋 Core Framework Features

- **Modern Testing Architecture**: Built on pytest 7.0+ and compatible with Python 3.11+, with support for a rich plugin ecosystem.
- **Centralized Configuration Management**: Thread-safe, singleton-pattern configuration loading from YAML files.
- **Database Integration**: Built-in PostgreSQL support for automatically persisting test results to a database; if no database is configured, results are saved locally.
- **Visual Test Reporting**: Integrates the pytest-html plugin to auto-generate clear and comprehensive HTML test reports.
- **Multi-dimensional Test Tagging**: Supports categorizing and filtering test cases by dimensions such as test stage, feature module, and execution platform.

---
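The "thread-safe singleton" claim above can be illustrated with a minimal sketch. This is not the actual `common/config_utils.py` code: the class name, `load`, and `get_config` are illustrative, and the parsed dict is injected directly so the sketch needs no YAML dependency (the real loader would presumably call `yaml.safe_load` on `config.yaml`).

```python
import threading

class ConfigUtils:
    """Minimal sketch of a thread-safe singleton configuration holder."""
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:
            with cls._lock:  # double-checked locking: cheap fast path, safe slow path
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

    def load(self, data: dict):
        # Real code would parse config.yaml here, e.g. yaml.safe_load(...)
        self._data = data
        return self

    def get_config(self, key, default=None):
        return self._data.get(key, default)

# Every construction returns the same shared instance
a = ConfigUtils().load({"database": {"port": 5432}})
b = ConfigUtils()
assert a is b
assert b.get_config("database")["port"] == 5432
```

Because the instance is created once under a lock, concurrent test workers all observe the same configuration object.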
## 🗂️ Project Directory Structure

```text
pytest_demo/
├── common/                  # Shared utility modules
│   ├── __init__.py
│   ├── config_utils.py      # Configuration loading and management
│   ├── db_utils.py          # Database operation utilities
│   └── capture_utils.py     # Test data capture and export utilities
├── results/                 # Output directory for test results and logs
├── suites/                  # Test suite directory
│   ├── UnitTest/            # Unit tests
│   ├── Feature/             # Functional tests
│   └── E2E/                 # End-to-end tests
├── config.yaml              # Main configuration file (YAML format)
├── conftest.py              # Shared pytest configuration and fixture definitions
├── pytest.ini               # pytest runtime parameter configuration
├── requirements.txt         # Project dependencies
└── README.md                # Project documentation (this document)
```

---
## 🚀 Quick Start

### Environment Requirements

- Python 3.11 or higher

### Installation and Setup

1. **Install Dependencies**

   ```bash
   pip install -r requirements.txt
   ```

2. **(Optional) Configure Database**

   Edit the database section in `config.yaml`:

   ```yaml
   database:
     enabled: true       # Enable database result storage (set to false to save to local directory)
     backup: "results/"
     host: "127.0.0.1"
     port: 5432
     name: "ucm_test"
     user: "postgres"
     password: "123456"
   ```
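The `enabled` flag above switches between database storage and the local `backup` directory. A hypothetical sketch of that dispatch is shown below; `save_result` and its behavior are illustrative, not the actual `db_utils` API, and the PostgreSQL branch is deliberately stubbed out.

```python
import json
import time
from pathlib import Path

def save_result(db_config: dict, table: str, row: dict) -> Path:
    """Persist one result row; fall back to a local JSON file when the
    database is disabled (illustrative sketch only)."""
    if db_config.get("enabled"):
        # Real code would INSERT into PostgreSQL here (e.g. via psycopg2)
        raise NotImplementedError("database branch not shown in this sketch")
    backup_dir = Path(db_config.get("backup", "results/"))
    backup_dir.mkdir(parents=True, exist_ok=True)
    # One timestamped JSON file per result row
    out = backup_dir / f"{table}_{int(time.time() * 1000)}.json"
    out.write_text(json.dumps(row, indent=2))
    return out

cfg = {"enabled": False, "backup": "results/"}
path = save_result(cfg, "demo_case", {"accuracy": 0.95, "loss": 0.05})
```

With `enabled: false`, each captured row lands as a timestamped JSON file under `results/`.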
3. **Run Tests**

   ```bash
   # Navigate to the project root directory
   cd /test

   # Run all tests
   pytest

   # Run selected tests by markers (examples)
   pytest --stage=0                      # Run unit tests only
   pytest --feature=test_uc_performance  # Run a specific feature module
   ```

---
## ⚙️ Development Guidelines and Best Practices

### 1. Test Case Organization Conventions

All test cases must reside under the `suites/` directory and follow these naming conventions:

- **File Names**: Must start with `test_` (e.g., `test_login.py`)
- **Class Names**: Must start with `Test` (e.g., `TestClassA`)
- **Function Names**: Must start with `test_` (e.g., `test_valid_user_login`)

> The pytest configuration (`pytest.ini`) pre-sets these discovery rules:
>
> ```ini
> testpaths = suites
> python_files = test_*.py
> python_classes = Test*
> python_functions = test_*
> ```
---

### 2. Multi-dimensional Marker System

The framework supports the following marker types:

| Marker Type | Value Description |
|-------------|-------------------|
| `stage`     | `0` = Unit Test, `1` = Smoke Test, `2` = Regression Test, `3` = Release Test |
| `feature`   | Feature module identifier (e.g., `"uc_performance"`) |
| `platform`  | Execution platform (e.g., `"Ascend"`, `"CUDA"`) |

**Usage Example:**

```python
import pytest

@pytest.mark.stage(0)
@pytest.mark.feature("uc_unit_test")
@pytest.mark.platform("Ascend")
def test_unit_case():
    assert True
```

**Run tests with specific markers:**

```bash
pytest --stage=0 --feature=capture_demo
```
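The `--stage` and `--feature` command-line options are custom, so the repository's `conftest.py` presumably registers them. A minimal sketch of how such option-based filtering could be wired up is shown below; the hook bodies and the `marker_matches` helper are assumptions for illustration, not the project's actual code.

```python
# Sketch of a conftest.py fragment (hypothetical; hook bodies are illustrative)

def pytest_addoption(parser):
    parser.addoption("--stage", action="store", default=None,
                     help="Select tests whose stage marker matches, e.g. 0")
    parser.addoption("--feature", action="store", default=None,
                     help="Comma-separated feature names to select")

def marker_matches(markers, stage, features):
    """Pure selection rule; markers is a list of (name, args) tuples."""
    if stage is not None and ("stage", (int(stage),)) not in markers:
        return False
    if features:
        wanted = set(features.split(","))
        present = {args[0] for name, args in markers if name == "feature" and args}
        if not (wanted & present):
            return False
    return True

def pytest_collection_modifyitems(config, items):
    stage = config.getoption("--stage")
    features = config.getoption("--feature")
    if stage is None and features is None:
        return
    # Keep only items whose markers satisfy the requested filters
    items[:] = [it for it in items
                if marker_matches([(m.name, m.args) for m in it.iter_markers()],
                                  stage, features)]
```

Keeping the selection rule in a pure helper like `marker_matches` makes the filtering logic easy to unit-test without running a full pytest session.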

---

### 3. Configuration and Parameter Management

- **Static configurations** (e.g., database connections, API endpoints) should be maintained in `config.yaml` and loaded via `config_utils`:

  ```python
  from common.config_utils import config_utils

  db_config = config_utils.get_config("database")
  api_url = config_utils.get_nested_config("easyPerf.api.url")
  ```

- **Dynamic test parameters** (e.g., input length, concurrency count) should be supplied via `@pytest.mark.parametrize`:

  ```python
  perf_scenarios = [
      (4000, 1024, 80),
      (2000, 512, 40),
  ]
  scenario_ids = [f"in_{s[0]}-out_{s[1]}-con_{s[2]}" for s in perf_scenarios]

  @pytest.mark.feature("uc_performance_test")
  @pytest.mark.parametrize("in_tokens, out_tokens, concurrent", perf_scenarios, ids=scenario_ids)
  def test_performance(in_tokens, out_tokens, concurrent):
      res = run_case(in_tokens, out_tokens, concurrent)
      assert res is not None
  ```
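`get_nested_config` presumably resolves dot-separated key paths like `"easyPerf.api.url"` against the parsed YAML tree. One possible implementation, written here as a free function over a plain dict rather than the real `config_utils` method:

```python
def get_nested_config(config: dict, path: str, default=None):
    """Resolve a dot-separated key path (illustrative sketch)."""
    node = config
    for key in path.split("."):
        # Stop early if the path walks off the dict tree
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

cfg = {"easyPerf": {"api": {"url": "http://127.0.0.1:8000"}}}
assert get_nested_config(cfg, "easyPerf.api.url") == "http://127.0.0.1:8000"
assert get_nested_config(cfg, "easyPerf.api.token", "none") == "none"
```

Returning a `default` for missing paths keeps test code free of defensive `KeyError` handling.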
---

### 4. Automatic Test Data Collection and Export

The framework supports automatic capture and persistence of test result data via the `@export_vars` decorator.

**Usage Requirements:**

- The decorator must be the innermost decorator (closest to the function definition).
- The function's return value must be a dictionary containing at least one of the following fields:
  - `_name`: Target database table name (**required**)
  - `_data`: A dictionary or list of dictionaries (key-value pairs become table columns)
  - `_proj`: A list of dictionaries (for structured projection data)

**Examples:**

```python
from common.capture_utils import export_vars
import pytest

@pytest.mark.feature("capture_demo")
@export_vars
def test_capture_scalar():
    return {"_name": "demo_case", "_data": {"accuracy": 0.95, "loss": 0.05}}

@pytest.mark.feature("capture_demo")
@export_vars
def test_capture_list():
    return {"_name": "demo_case", "_data": {"accuracy": [0.9, 0.95], "loss": [0.1, 0.05]}}

@pytest.mark.feature("demo")
@export_vars
def test_proj_data():
    return {
        "_name": "demo_case",
        "_proj": [
            {"accuracy": 0.88, "loss": 0.12},
            {"accuracy": 0.92, "loss": 0.08},
        ],
    }
```

> Data will be automatically written to the database or a local file based on the `database.enabled` setting in `config.yaml`.
---
### 5. Fixture Usage Guidelines

`@pytest.fixture` provides reusable test dependencies (e.g., database connections, service instances).

**Example:**

```python
import pytest

class Calculator:
    def add(self, a, b): return a + b
    def divide(self, a, b): return a / b

@pytest.fixture(scope="module")
def calc():
    return Calculator()

@pytest.mark.feature("calculator")
class TestCalculator:
    def test_add(self, calc):
        assert calc.add(1, 2) == 3

    def test_divide_by_zero(self, calc):
        with pytest.raises(ZeroDivisionError):
            calc.divide(6, 0)
```

> **Tip**: A fixture's `scope` can be set to `function` (default), `class`, `module`, or `session` to control how often the fixture is rebuilt and how widely it is shared, improving resource reuse.
