291 changes: 291 additions & 0 deletions .github/workflows/ci-performance.yml
@@ -0,0 +1,291 @@
name: ci-performance
on:
pull_request:
branches:
- alpha
- beta
- release
- 'release-[0-9]+.x.x'
- next-major
paths-ignore:
- '**.md'
- 'docs/**'

env:
NODE_VERSION: 24.11.0
MONGODB_VERSION: 8.0.4

permissions:
contents: read
pull-requests: write
issues: write
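
# Strategy: benchmark the base branch first, then check out and benchmark the PR
# head, and compare the two result sets. Benchmark steps run with
# continue-on-error so a failing benchmark run never blocks the pull request.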

jobs:
performance-check:
name: Benchmarks
runs-on: ubuntu-latest
timeout-minutes: 30

steps:
- name: Checkout base branch
uses: actions/checkout@v4
with:
ref: ${{ github.base_ref }}
fetch-depth: 1

- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'

- name: Install dependencies (base)
run: npm ci

- name: Build Parse Server (base)
run: npm run build

- name: Run baseline benchmarks
id: baseline
run: |
echo "Checking if benchmark script exists..."
if [ ! -f "benchmark/performance.js" ]; then
echo "⚠️ Benchmark script not found in base branch - this is expected for new features"
echo "Skipping baseline benchmark"
echo '[]' > baseline.json
echo "Baseline: N/A (benchmark script not in base branch)" > baseline-output.txt
exit 0
fi
echo "Running baseline benchmarks..."
npm run benchmark > baseline-output.txt 2>&1 || benchmark_exit=$?
echo "Benchmark command completed with exit code: ${benchmark_exit:-0}"
echo "Output file size: $(wc -c < baseline-output.txt) bytes"
echo "--- Begin baseline-output.txt ---"
cat baseline-output.txt
echo "--- End baseline-output.txt ---"
# Extract the JSON array from the output (lines from the first '[' through the closing ']')
sed -n '/^\[/,/^\]/p' baseline-output.txt > baseline.json; [ -s baseline.json ] || echo '[]' > baseline.json
echo "Extracted JSON size: $(wc -c < baseline.json) bytes"
echo "Baseline benchmark results:"
cat baseline.json
continue-on-error: true

- name: Upload baseline results
uses: actions/upload-artifact@v4
with:
name: baseline-benchmark
path: |
baseline.json
baseline-output.txt
retention-days: 7

- name: Checkout PR branch
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 1
clean: true

- name: Setup Node.js (PR)
uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'

- name: Install dependencies (PR)
run: npm ci

- name: Build Parse Server (PR)
run: npm run build

- name: Run PR benchmarks
id: pr-bench
run: |
echo "Running PR benchmarks..."
npm run benchmark > pr-output.txt 2>&1 || benchmark_exit=$?
echo "Benchmark command completed with exit code: ${benchmark_exit:-0}"
echo "Output file size: $(wc -c < pr-output.txt) bytes"
echo "--- Begin pr-output.txt ---"
cat pr-output.txt
echo "--- End pr-output.txt ---"
# Extract the JSON array from the output (lines from the first '[' through the closing ']')
sed -n '/^\[/,/^\]/p' pr-output.txt > pr.json; [ -s pr.json ] || echo '[]' > pr.json
echo "Extracted JSON size: $(wc -c < pr.json) bytes"
echo "PR benchmark results:"
cat pr.json
continue-on-error: true

- name: Upload PR results
uses: actions/upload-artifact@v4
with:
name: pr-benchmark
path: |
pr.json
pr-output.txt
retention-days: 7

- name: Verify benchmark files exist
run: |
echo "Checking for benchmark result files..."
if [ ! -f baseline.json ] || [ ! -s baseline.json ]; then
echo "⚠️ baseline.json is missing or empty, creating empty array"
echo '[]' > baseline.json
fi
if [ ! -f pr.json ] || [ ! -s pr.json ]; then
echo "⚠️ pr.json is missing or empty, creating empty array"
echo '[]' > pr.json
fi
echo "baseline.json size: $(wc -c < baseline.json) bytes"
echo "pr.json size: $(wc -c < pr.json) bytes"

- name: Store benchmark result (PR)
uses: benchmark-action/github-action-benchmark@v1
if: github.event_name == 'pull_request' && hashFiles('pr.json') != ''
continue-on-error: true
with:
name: Parse Server Performance
tool: 'customSmallerIsBetter'
output-file-path: pr.json
github-token: ${{ secrets.GITHUB_TOKEN }}
auto-push: false
save-data-file: false
alert-threshold: '110%'
comment-on-alert: true
fail-on-alert: false
alert-comment-cc-users: '@parse-community/maintainers'
summary-always: true

- name: Compare benchmark results
id: compare
run: |
node -e "
const fs = require('fs');

let baseline, pr;
try {
baseline = JSON.parse(fs.readFileSync('baseline.json', 'utf8'));
pr = JSON.parse(fs.readFileSync('pr.json', 'utf8'));
} catch (e) {
console.log('⚠️ Could not parse benchmark results');
process.exit(0);
}

// Handle case where baseline doesn't exist (new feature)
if (!Array.isArray(baseline) || baseline.length === 0) {
if (!Array.isArray(pr) || pr.length === 0) {
console.log('⚠️ Benchmark results are empty or invalid');
process.exit(0);
}
console.log('# Performance Benchmark Results\n');
console.log('> ℹ️ Baseline not available - this appears to be a new feature\n');
console.log('| Benchmark | Value | Details |');
console.log('|-----------|-------|---------|');
pr.forEach(result => {
console.log(\`| \${result.name} | \${result.value.toFixed(2)} ms | \${result.extra} |\`);
});
console.log('');
console.log('✅ **New benchmarks established for this feature.**');
process.exit(0);
}

if (!Array.isArray(pr) || pr.length === 0) {
console.log('⚠️ PR benchmark results are empty or invalid');
process.exit(0);
}

console.log('# Performance Comparison\n');
console.log('| Benchmark | Baseline | PR | Change | Status |');
console.log('|-----------|----------|-----|--------|--------|');

let hasRegression = false;
let hasImprovement = false;

baseline.forEach(baseResult => {
const prResult = pr.find(p => p.name === baseResult.name);
if (!prResult) {
console.log(\`| \${baseResult.name} | \${baseResult.value.toFixed(2)} ms | N/A | - | ⚠️ Missing |\`);
return;
}

const baseValue = parseFloat(baseResult.value);
const prValue = parseFloat(prResult.value);
const change = ((prValue - baseValue) / baseValue * 100);
const changeStr = change > 0 ? \`+\${change.toFixed(1)}%\` : \`\${change.toFixed(1)}%\`;

let status = '✅';
if (change > 20) {
status = '❌ Much Slower';
hasRegression = true;
} else if (change > 10) {
status = '⚠️ Slower';
hasRegression = true;
} else if (change < -10) {
status = '🚀 Faster';
hasImprovement = true;
}

console.log(\`| \${baseResult.name} | \${baseValue.toFixed(2)} ms | \${prValue.toFixed(2)} ms | \${changeStr} | \${status} |\`);
});

console.log('');
if (hasRegression) {
console.log('⚠️ **Performance regressions detected.** Please review the changes.');
} else if (hasImprovement) {
console.log('🚀 **Performance improvements detected!** Great work!');
} else {
console.log('✅ **No significant performance changes.**');
}
" | tee comparison.md

- name: Upload comparison
uses: actions/upload-artifact@v4
with:
name: benchmark-comparison
path: comparison.md
retention-days: 30

- name: Prepare comment body
if: github.event_name == 'pull_request'
run: |
echo "## Performance Impact Report" > comment.md
echo "" >> comment.md
if [ -f comparison.md ]; then
cat comparison.md >> comment.md
else
echo "⚠️ Could not generate performance comparison." >> comment.md
fi
echo "" >> comment.md
echo "<details>" >> comment.md
echo "<summary>📊 View detailed results</summary>" >> comment.md
echo "" >> comment.md
echo "### Baseline Results" >> comment.md
echo "\`\`\`json" >> comment.md
cat baseline.json >> comment.md
echo "\`\`\`" >> comment.md
echo "" >> comment.md
echo "### PR Results" >> comment.md
echo "\`\`\`json" >> comment.md
cat pr.json >> comment.md
echo "\`\`\`" >> comment.md
echo "" >> comment.md
echo "</details>" >> comment.md
echo "" >> comment.md
echo "*Benchmarks ran with ${BENCHMARK_ITERATIONS:-100} iterations per test on Node.js ${{ env.NODE_VERSION }}*" >> comment.md

- name: Comment PR with results
if: github.event_name == 'pull_request'
uses: thollander/actions-comment-pull-request@v2
continue-on-error: true
with:
filePath: comment.md
comment_tag: performance-benchmark
mode: recreate

- name: Generate job summary
if: always()
run: |
if [ -f comparison.md ]; then
cat comparison.md >> $GITHUB_STEP_SUMMARY
else
echo "⚠️ Benchmark comparison not available" >> $GITHUB_STEP_SUMMARY
fi
58 changes: 57 additions & 1 deletion CONTRIBUTING.md
@@ -21,9 +21,13 @@
- [Good to Know](#good-to-know)
- [Troubleshooting](#troubleshooting)
- [Please Do's](#please-dos)
- [TypeScript Tests](#typescript-tests)
- [TypeScript Tests](#typescript-tests)
- [Test against Postgres](#test-against-postgres)
- [Postgres with Docker](#postgres-with-docker)
- [Performance Testing](#performance-testing)
- [Adding Tests](#adding-tests)
- [Adding Benchmarks](#adding-benchmarks)
- [Benchmark Guidelines](#benchmark-guidelines)
- [Breaking Changes](#breaking-changes)
- [Deprecation Policy](#deprecation-policy)
- [Feature Considerations](#feature-considerations)
@@ -298,6 +302,58 @@ RUN chmod +x /docker-entrypoint-initdb.d/setup-dbs.sh

Note that the script above will ONLY be executed during initialization of the container when the database contains no data; see the official [Postgres image](https://hub.docker.com/_/postgres) for details. If you want the script to run again, make sure there is no data in the container's `/var/lib/postgresql/data` directory.

### Performance Testing

Parse Server includes an automated performance benchmarking system that runs on every pull request to detect performance regressions and track improvements over time.

#### Adding Tests

You should consider adding performance benchmarks if your contribution:

- **Introduces a performance-critical feature**: Features that will be frequently used in production environments, such as new query operations, authentication methods, or data processing functions.
- **Modifies existing critical paths**: Changes to core functionality like object CRUD operations, query execution, user authentication, file operations, or Cloud Code execution.
- **Has potential performance impact**: Any change that affects database operations, network requests, data parsing, caching mechanisms, or algorithmic complexity.
- **Optimizes performance**: If your PR specifically aims to improve performance, adding benchmarks helps verify the improvement and prevents future regressions.

#### Adding Benchmarks

Performance benchmarks are located in [`benchmark/performance.js`](benchmark/performance.js). To add a new benchmark:

1. **Identify the operation to benchmark**: Determine the specific operation you want to measure (e.g., a new query type, a new API endpoint).

2. **Create a benchmark function**: Follow the existing patterns in `benchmark/performance.js`:
```javascript
async function benchmarkNewFeature() {
return measureOperation('Feature Name', async () => {
// Your operation to benchmark
const result = await someOperation();
}, ITERATIONS);
}
```

3. **Add to benchmark suite**: Register your benchmark in the `runBenchmarks()` function:
```javascript
console.error('Running New Feature benchmark...');
await cleanupDatabase();
results.push(await benchmarkNewFeature());
```

4. **Test locally**: Run the benchmarks locally to verify they work:
```bash
npm run benchmark:quick # Quick test with 10 iterations
npm run benchmark # Full test with 100 iterations
```

For new features where no baseline exists, the CI will establish new benchmarks that future PRs will be compared against.
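
The CI workflow extracts a JSON array from the benchmark output and passes it to `github-action-benchmark` in its `customSmallerIsBetter` format. As a rough illustration (the benchmark name and numbers below are made up), each entry carries a `name`, a `value` in milliseconds, and an `extra` string with run details — these are the fields the workflow's comparison script reads:
```json
[
  {
    "name": "Object Create",
    "unit": "ms",
    "value": 12.34,
    "extra": "100 iterations"
  }
]
```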

#### Benchmark Guidelines

- **Keep benchmarks focused**: Each benchmark should test a single, well-defined operation.
- **Use realistic data**: Test with data that reflects real-world usage patterns.
- **Clean up between runs**: Use `cleanupDatabase()` to ensure consistent test conditions.
- **Consider iteration count**: Use fewer iterations for expensive operations (see the `ITERATIONS` environment variable and the helper sketch below).
- **Document what you're testing**: Add clear comments explaining what the benchmark measures and why it's important.
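
To make the pattern above concrete, here is a minimal, hypothetical sketch of what a `measureOperation`-style helper could look like. The actual implementation lives in `benchmark/performance.js` and may differ; this sketch only assumes it produces results in the JSON format shown earlier:

```javascript
const { performance } = require('perf_hooks');

// Iteration count, assumed to be configurable via an environment variable
// (the guidelines above refer to `ITERATIONS`); defaults to 100.
const ITERATIONS = parseInt(process.env.ITERATIONS || '100', 10);

async function measureOperation(name, operation, iterations = ITERATIONS) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    await operation();
  }
  const elapsed = performance.now() - start;
  return {
    name,
    unit: 'ms',
    value: elapsed / iterations, // average time per iteration
    extra: `${iterations} iterations`,
  };
}
```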

## Breaking Changes

Breaking changes should be avoided whenever possible. For a breaking change to be accepted, the benefits of the change have to clearly outweigh the costs of developers having to adapt their deployments. If a breaking change is only cosmetic it will likely be rejected and preferred to become obsolete organically during the course of further development, unless it is required as part of a larger change. Breaking changes should follow the [Deprecation Policy](#deprecation-policy).