Playwright with Azure DevOps Pipeline (Self-Hosted Agent) and Slack Notification
My test case involves using the Playwright JavaScript API for a book management application. The tests require a GraphQL server running in the background to handle requests and responses.
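To give a rough idea of the setup, here is a minimal sketch of such a test, assuming the Apollo Server exposes a GraphQL endpoint at http://localhost:4000/graphql and a hypothetical books query; the real schema and assertions in the project are more involved.

// tests/api/book_management/books.spec.js (illustrative sketch only)
const { test, expect } = require('@playwright/test');

test('lists books via the GraphQL API', async ({ request }) => {
  // Post a GraphQL query to the locally running server
  const response = await request.post('http://localhost:4000/graphql', {
    data: { query: '{ books { title author } }' }, // hypothetical schema
  });

  expect(response.ok()).toBeTruthy();
  const { data } = await response.json();
  expect(Array.isArray(data.books)).toBeTruthy();
});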
As QA automation engineers, it’s our job to make sure everything works smoothly. To keep things in check, we rely on CI/CD pipelines to track test results; in this case, we’re setting up an Azure DevOps pipeline to handle it.
Considering cost efficiency, we’ll run the Azure pipeline on a self-hosted agent (local machine). This approach not only reduces costs compared to Microsoft-hosted agents but also provides performance benefits such as:
- Reduced queue times since the pipeline runs directly on the local agent.
- Faster builds due to the ability of self-hosted agents to cache dependencies, Docker layers, or large files locally, eliminating the need to start from scratch or wait in a queue during peak times.
The Azure DevOps pipeline structure will include:
- Starting the GraphQL server (Apollo Server).
- Running the Playwright tests against the server.
- Storing the test results as an artifact in the pipeline.
- Reporting the test results via Slack notifications.
Let’s set up Azure DevOps to run the pipeline step by step:
Step 1: Create an organization in Azure DevOps
Step 2: Create a project
Step 3: You can either push your code to this repo or import it from your GitHub repository here
Step 4: Before running your pipeline, go to the project settings below to set up an agent pool first
Step 5: Under ‘Agent pools’ in the Pipelines section, click the ‘Add pool’ button, select ‘Self-hosted’, name it ‘local’, and then click the ‘Create’ button
Step 6: Click the ‘New agent’ button under your pool named ‘local’
Step 7: It will show instructions on how to set up the agent on each OS platform; in our case, use ‘macOS’
- Click the ‘Download’ button under ‘Download the agent’
- Follow the steps to unpack the agent archive, then run
./config.sh
You might see several pop-ups on macOS asking for permission, which you can manage in the ‘Privacy & Security’ settings.
Step 8: During setup, it will ask you for the server URL and a PAT (personal access token)
The server URL is https://dev.azure.com/{your-organization}, and the PAT can be created via your user settings here
After the configuration completes, you might see your local agent show up in your ‘local’ pool like this, with the status Offline
Start your agent with ./run.sh
After starting the agent, the status changes to Online, and you are ready to run the pipeline on your local machine
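If you would rather not answer the configuration prompts interactively, the agent also supports unattended configuration. A sketch using the documented flags, with the ‘local’ pool from above and a placeholder agent name:

# Unattended agent configuration (my-local-agent is just an example name)
./config.sh --unattended \
  --url https://dev.azure.com/{your-organization} \
  --auth pat \
  --token <your-PAT> \
  --pool local \
  --agent my-local-agent

# Then start the agent as before
./run.sh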
Step 9: Click the ‘New pipeline’ button and set up the pipeline to pull code from your repository on Azure. You may set up your pipeline with the Node.js template if you use Playwright with JavaScript
In the review YAML step, you may see the default pool image like this, which uses Microsoft-hosted agents
But in our case, which is self-hosted, we need to put our pool name here instead
Here is my ‘azure-pipelines.yaml’
trigger:
- main

# Using local pool instead of Microsoft-hosted agents
pool:
  name: local

variables:
  NODE_VERSION: '22.x'
  SERVER_PORT: 4000

steps:
- task: NodeTool@0
  inputs:
    versionSpec: $(NODE_VERSION)
  displayName: 'Install Node.js'

- script: |
    npm install
  displayName: 'Install Dependencies'

- script: |
    npx playwright install --with-deps
  displayName: 'Install Playwright'

# Add this before running tests
- script: |
    npm install playwright-ctrf-json-reporter --save-dev
  displayName: 'Install CTRF Reporter'

- script: |
    # Create directories for test results
    mkdir -p ctrf playwright-report

    # Start server and save PID
    node src/server.js &
    SERVER_PID=$!

    # Wait for server to start
    echo "Waiting for server to start..."
    sleep 10

    # Run tests with both reporters and save exit code
    DEBUG=pw:api npx playwright test tests/api/book_management/ \
      --reporter=playwright-ctrf-json-reporter,html

    # Store test status before killing server
    TEST_EXIT_CODE=$?

    # Kill server
    kill $SERVER_PID || true

    # List contents of results directories
    echo "Contents of ctrf directory:"
    ls -la ctrf/
    echo "Contents of playwright-report directory:"
    ls -la playwright-report/

    exit $TEST_EXIT_CODE
  displayName: 'Run Tests'

- script: |
    # Check if report file exists
    if [ ! -f "ctrf/ctrf-report.json" ]; then
      echo "Test result file not found. Creating empty report..."
      mkdir -p ctrf
      echo '{"results":{"summary":{"tests":0,"passed":0,"failed":0,"start":0,"stop":0}}}' > ctrf/ctrf-report.json
    fi

    # Parse results with error handling
    echo "Parsing test results..."
    TOTAL_TESTS=$(jq -r '.results.summary.tests // 0' ctrf/ctrf-report.json)
    PASSED_TESTS=$(jq -r '.results.summary.passed // 0' ctrf/ctrf-report.json)
    FAILED_TESTS=$(jq -r '.results.summary.failed // 0' ctrf/ctrf-report.json)

    # Calculate duration with error handling
    START_TIME=$(jq -r '.results.summary.start // 0' ctrf/ctrf-report.json)
    STOP_TIME=$(jq -r '.results.summary.stop // 0' ctrf/ctrf-report.json)
    if [ "$START_TIME" != "0" ] && [ "$STOP_TIME" != "0" ]; then
      DURATION_MS=$((STOP_TIME - START_TIME))
      DURATION_SECONDS=$((DURATION_MS / 1000))
      DURATION="${DURATION_SECONDS} sec"
    else
      DURATION="0 sec"
    fi

    # Set pipeline variables with error checking
    echo "##vso[task.setvariable variable=TOTAL_TESTS]${TOTAL_TESTS:-0}"
    echo "##vso[task.setvariable variable=PASSED_TESTS]${PASSED_TESTS:-0}"
    echo "##vso[task.setvariable variable=FAILED_TESTS]${FAILED_TESTS:-0}"
    echo "##vso[task.setvariable variable=DURATION]${DURATION}"

    # Output results for debugging
    echo "Test Results Summary:"
    echo "Total Tests: $TOTAL_TESTS"
    echo "Passed Tests: $PASSED_TESTS"
    echo "Failed Tests: $FAILED_TESTS"
    echo "Duration: $DURATION"
  displayName: 'Parse Test Results'

- task: PublishPipelineArtifact@1
  inputs:
    targetPath: 'playwright-report'
    artifact: 'playwright-report'
    publishLocation: 'pipeline'
  condition: succeededOrFailed()
  displayName: 'Publish Test Report'

- script: |
    # Get branch name by removing refs/heads/
    BRANCH_NAME=$(echo "$(Build.SourceBranch)" | sed 's/refs\/heads\///')
    BUILD_URL="$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_build/results?buildId=$(Build.BuildId)"

    # Create JSON payload
    JSON_PAYLOAD=$(cat << EOF
    {
      "text": "Test Results from branch: $BRANCH_NAME",
      "attachments": [
        {
          "color": "#36a64f",
          "fields": [
            {
              "title": "Total Tests",
              "value": "$(TOTAL_TESTS)",
              "short": true
            },
            {
              "title": "Passed",
              "value": "$(PASSED_TESTS)",
              "short": true
            },
            {
              "title": "Failed",
              "value": "$(FAILED_TESTS)",
              "short": true
            },
            {
              "title": "Duration",
              "value": "$(DURATION)",
              "short": true
            },
            {
              "title": "Report",
              "value": "$BUILD_URL",
              "short": false
            }
          ]
        }
      ]
    }
    EOF
    )

    # Send to Slack using curl
    curl -X POST \
      -H 'Content-type: application/json' \
      --data "$JSON_PAYLOAD" \
      $(slackWebhookUrl)
  displayName: 'Send Slack Notification'
  condition: succeededOrFailed()
  env:
    SLACK_WEBHOOK_URL: $(slackWebhookUrl)

Step 10: Run your pipeline
After the job completes, you will see the artifact file here
The result after downloading the artifact
And the Slack notification result
Prerequisites for the Slack integration before running the pipeline:
1. Get the Slack webhook URL from the ‘Incoming WebHooks’ app in Slack
2. Click ‘Variables’ in your pipeline, add a variable named ‘slackWebhookUrl’, and paste your webhook URL there (you may want to mark it as a secret variable so it is masked in the logs)
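Before wiring the webhook into the pipeline, you can sanity-check it from your terminal with a one-off test message (the URL below is a placeholder; use your own Incoming Webhooks URL):

curl -X POST \
  -H 'Content-type: application/json' \
  --data '{"text": "Test message from the Playwright pipeline"}' \
  https://hooks.slack.com/services/XXXX/YYYY/ZZZZ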
Let me explain the Azure DevOps pipeline variable scripting in detail.
First, let’s look at the build URL construction:
$buildUrl = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_build/results?buildId=$(Build.BuildId)"

This URL is built using predefined Azure DevOps pipeline variables:
- $(System.TeamFoundationCollectionUri): the base URL of your Azure DevOps organization, like "https://dev.azure.com/your-organization/"
- $(System.TeamProject): the name of your project in Azure DevOps
- $(Build.BuildId): a unique identifier for the current build
When combined, they create a URL that points directly to your build results, similar to how GitHub Actions provides a URL to each workflow run.
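For example, with placeholder values filled in, the resulting link looks something like https://dev.azure.com/your-organization/your-project/_build/results?buildId=1234.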
Azure DevOps provides many other useful predefined variables. Here are some commonly used ones:
1. Build-related variables:
  - $(Build.SourceVersion): the Git commit ID
  - $(Build.SourceBranch): the branch name (like "refs/heads/main")
  - $(Build.Repository.Name): your repository name
2. Pipeline-related variables:
  - $(Pipeline.Workspace): the working directory for your pipeline
  - $(Agent.BuildDirectory): the directory containing all work folders
  - $(Agent.OS): the operating system of the build agent
3. Release-related variables:
  - $(Release.EnvironmentName): the environment name (like "Production" or "Staging")
  - $(Release.ReleaseId): a unique identifier for the release
You can use these variables in other parts of your pipeline too. For example, if you wanted to include the branch name in your Slack message:
$branchName = "$(Build.SourceBranch)".Replace('refs/heads/', '')
$body = @{
  text = "Test Results from branch: $branchName"
  # ... rest of your message structure
}

Syntax for setting variables (##vso[task.setvariable]):

echo "##vso[task.setvariable variable=TOTAL_TESTS]${TOTAL_TESTS:-0}"
echo "##vso[task.setvariable variable=PASSED_TESTS]${PASSED_TESTS:-0}"
echo "##vso[task.setvariable variable=FAILED_TESTS]${FAILED_TESTS:-0}"
echo "##vso[task.setvariable variable=DURATION]${DURATION}"

These logging commands turn the parsed values into pipeline variables, so later steps can read them as $(TOTAL_TESTS), $(PASSED_TESTS), $(FAILED_TESTS), and $(DURATION); that is how the Slack notification step fills in its message fields.

For the Playwright report, I use both playwright-ctrf-json-reporter and html because the JSON format makes it easy to extract the summary results that are shown in the Slack notification.
You may want to see more detail about this library here
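For reference, the pipeline above enables both reporters on the command line via --reporter. If you prefer, the equivalent can be configured in playwright.config.js instead; a sketch (the pipeline's parse step expects the JSON at ctrf/ctrf-report.json, so keep the reporter's default output location or adjust the jq paths accordingly):

// playwright.config.js (sketch)
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  reporter: [
    ['playwright-ctrf-json-reporter', {}], // JSON summary consumed by the Slack step
    ['html'],                              // detailed report published as the artifact
  ],
});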
For more detail, I publish the HTML report as an artifact so I can see the results of every test case.
The Playwright HTML report’s default output folder is playwright-report.
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: 'playwright-report'
    artifact: 'playwright-report'
    publishLocation: 'pipeline'
  condition: succeededOrFailed()
  displayName: 'Publish Test Report'

As you can see, I can attach values inside the HTML report by using:
test.info().attach('Created Book', {
  body: JSON.stringify(books, null, 2), // Format as JSON for readability
  contentType: 'application/json'
});

You can also describe additional expected values at the top of a test case with test annotations, as shown in the sketch below.
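A short sketch of what that can look like with Playwright’s runtime annotations API (the type and description values here are only examples):

const { test } = require('@playwright/test');

test('creates a book', async ({ request }) => {
  // Annotations appear alongside the test in the HTML report
  test.info().annotations.push({
    type: 'expected',
    description: 'The created book is returned with a non-empty id and the submitted title',
  });

  // ... rest of the test
});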
Thanks for reading, and I hope you found this article to be helpful.
