Accessing Tests in DSALTA
Navigate to Compliance > Tests to view all tests for your active frameworks.
The Tests page displays:
Test name and description
Current status (Passing, Failing, Needs Attention)
Source (integration name or "Manual")
Last run date and time
Next scheduled run
Associated controls and frameworks
Assigned owner
Filtering and Organizing Tests
Use filters to focus on specific tests:
By Status:
Show only failing tests
View tests needing attention
Hide passing tests to focus on problems
By Source:
Filter by integration (AWS, Google Workspace, GitHub)
Show only manual tests
View tests from specific systems
By Framework:
Display SOC 2 tests only
Show ISO 27001 requirements
Filter by active certifications
By Risk Level:
Critical tests first
High-priority items
Lower-risk tests
Running Automated Tests
Automated tests run on their configured schedule without intervention. However, you can:
Manually Trigger: Run a test immediately rather than waiting for the next scheduled execution
Open the test detail page
Click Run Test or Run Now
Wait for results (typically seconds to minutes)
Review updated status
Use Cases for Manual Runs:
After fixing a failed test
Before audit preparations
After configuration changes
To verify recent updates
Executing Manual Tests
For tests requiring human verification:
Opening the Test
Navigate to the manual test from the Tests page
Click to open the test detail view
Review the Success Criteria tab
Understanding Requirements
The test detail shows:
What needs to be verified
How to verify it
What evidence is required
Success criteria for passing
Performing Verification
Execute the verification activity:
Review required documents
Check system configurations
Interview relevant personnel
Observe physical controls
Verify process completion
Recording Results
Click Record Test Result or Mark Complete
Select outcome: Pass or Fail
Add notes explaining findings
Upload supporting evidence (screenshots, documents, attestations)
Set next test date
Submit the result
Test Detail View
Each test provides comprehensive information across multiple tabs:
Results Tab
Current and historical test outcomes:
Latest result and timestamp
Pass/fail status
Result details and data
Links to evidence
Source Data Tab
For automated tests, view:
Raw data collected from integrations
API responses
Configuration details
System states
This transparency helps troubleshoot unexpected results.
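For illustration only (this is not DSALTA's internal implementation), the kind of raw source data an AWS-backed automated test works with can be reproduced with a short script; the boto3 client and the root-MFA check below are assumptions for the example:

  import boto3  # assumes AWS credentials equivalent to the integration's are configured

  iam = boto3.client("iam")

  # Raw source data the test would collect and retain for review
  summary = iam.get_account_summary()["SummaryMap"]

  # Example evaluation against that data: root account MFA must be enabled
  root_mfa_enabled = summary.get("AccountMFAEnabled", 0) == 1
  print("Source data:", summary)
  print("Result:", "pass" if root_mfa_enabled else "fail")

Comparing output like this with the Source Data tab makes it easier to see why a result looks unexpected.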
Controls Tab
Shows which controls this test verifies:
Control IDs and names
Framework associations
Control owners
Related policies
How to Remediate Tab
Step-by-step guidance for fixing failures:
Purpose of the test
How the test runs
Success criteria
Remediation steps
Console or CLI commands (see the example after this list)
Additional resources
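For example (an assumed scenario, not a DSALTA-specific procedure), the remediation guidance for a failing S3 public access check might point to AWS CLI commands along these lines, adapted to the bucket the test flagged:

  # Inspect the current public access settings (hypothetical bucket name)
  aws s3api get-public-access-block --bucket example-bucket

  # Block public access, then re-run the test to confirm the fix
  aws s3api put-public-access-block --bucket example-bucket \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true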
Test Scheduling
Configure when manual tests run:
One-Time Tests: Execute once, no recurring schedule
Recurring Tests: Repeat at regular intervals
Weekly
Monthly
Quarterly
Annually
Custom intervals
Set appropriate frequencies based on:
Control risk level
Framework requirements
Organizational policies
Audit expectations
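As a simple illustration (the cadences below are assumptions, not DSALTA defaults), a frequency policy based on risk level can be written down as a small mapping and applied consistently:

  # Hypothetical frequency policy; adjust to your frameworks and audit expectations
  frequency_by_risk = {
      "critical": "weekly",
      "high": "monthly",
      "medium": "quarterly",
      "low": "annually",
  }

  def suggested_frequency(risk_level: str) -> str:
      return frequency_by_risk.get(risk_level.lower(), "quarterly")

  print(suggested_frequency("High"))  # -> "monthly"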
Test Ownership
Assign tests to team members responsible for:
Executing manual tests on schedule
Responding to automated test failures
Documenting test results
Maintaining test evidence
Owners receive notifications when:
Tests fail
Manual tests are due
Tests need attention
Evidence requires updates
Bulk Test Management
Manage multiple tests simultaneously:
Bulk Assignment: Assign multiple tests to one owner
Bulk Scheduling: Set schedules for related tests
Bulk Status Review: Mark multiple manual tests complete
Efficient for:
Quarterly access reviews across systems
Annual policy acknowledgment tests
Regular vendor assessment tests
Test Evidence Requirements
Tests require supporting evidence:
Automated Tests: Evidence collected automatically from integrations
System logs
Configuration exports
API responses
Historical data
Manual Tests: Evidence uploaded by test executors
Verification screenshots
Completed checklists
Sign-off documents
Attestation records
Insufficient evidence may prevent test results from satisfying audit requirements.
Test Failure Management
When tests fail:
Immediate Actions
Review failure details
Assess impact on compliance
Assign to the appropriate owner
Prioritize based on risk level
Investigation
Check if failure is legitimate or a false positive
Review what changed since the last passing result (see the comparison sketch after this list)
Identify root cause
Determine the scope of impact
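One generic way to support the "what changed" step (a sketch, not a built-in DSALTA feature) is to diff the source data from the last passing run against the current run:

  # Hypothetical configuration snapshots, e.g. copied from the test's Source Data tab
  last_passing = {"MinimumPasswordLength": 14, "RequireSymbols": True, "MaxPasswordAge": 90}
  current = {"MinimumPasswordLength": 8, "RequireSymbols": True, "MaxPasswordAge": 90}

  changed = {key: (last_passing.get(key), current.get(key))
             for key in last_passing.keys() | current.keys()
             if last_passing.get(key) != current.get(key)}

  print(changed)  # -> {'MinimumPasswordLength': (14, 8)}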
Remediation
Follow guidance in "How to Remediate" tab
Implement fixes
Document actions taken
Re-run the test to verify the resolution
Prevention
Update procedures to prevent recurrence
Enhance monitoring or alerting
Improve change management
Train team members
Test Performance Metrics
Track testing effectiveness:
Overall Pass Rate: Percentage of tests passing
Test Coverage: Percentage of controls with active tests
Mean Time to Remediation: How quickly failures are fixed
Test Execution Rate: Percentage of scheduled manual tests completed on time
Failure Frequency: Which tests fail most often
Use these metrics to improve your compliance program.
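If you want to reproduce these figures outside the platform (a minimal sketch with hypothetical data shapes, not a DSALTA export format), they reduce to straightforward calculations over test records:

  from datetime import datetime, timedelta

  # Hypothetical test records; in practice these would come from your own export
  tests = [
      {"status": "pass", "controls": ["CC6.1"], "failed_at": None, "remediated_at": None},
      {"status": "fail", "controls": ["CC6.2"], "failed_at": datetime(2024, 5, 1),
       "remediated_at": datetime(2024, 5, 3)},
  ]
  all_controls = {"CC6.1", "CC6.2", "CC7.1"}  # hypothetical control set

  pass_rate = sum(t["status"] == "pass" for t in tests) / len(tests)

  tested = {c for t in tests for c in t["controls"]}
  coverage = len(tested & all_controls) / len(all_controls)

  fixed = [t["remediated_at"] - t["failed_at"] for t in tests
           if t["failed_at"] and t["remediated_at"]]
  mttr = sum(fixed, timedelta()) / len(fixed) if fixed else None

  print(f"Pass rate {pass_rate:.0%}, coverage {coverage:.0%}, MTTR {mttr}")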
Test Documentation for Audits
Auditors review:
Test configurations and schedules
Historical test results
Remediation timelines
Evidence quality
Test coverage across controls
Well-maintained tests demonstrate mature, systematic control monitoring.
Custom Test Creation
Create organization-specific tests:
Navigate to the Tests page
Click Create Custom Test
Define test parameters:
Test name and description
Success criteria
Test frequency
Associated controls
Evidence requirements
Assign owner
Configure as manual or automated (if you have a custom integration)
Custom tests address unique requirements not covered by standard framework tests.
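If you do back a custom test with your own integration, its core is usually a check that evaluates collected data against the success criteria and returns a result with supporting evidence. A minimal sketch, with entirely hypothetical data shapes (DSALTA does not mandate this structure):

  from datetime import datetime, timezone

  def run_custom_test(collected: dict) -> dict:
      """Hypothetical check: every production database must have backups enabled."""
      failing = [db["name"] for db in collected.get("databases", [])
                 if not db.get("backups_enabled")]
      return {
          "status": "fail" if failing else "pass",
          "notes": f"Missing backups: {failing}" if failing else "All databases have backups enabled",
          "evidence": collected,  # raw data retained as supporting evidence
          "checked_at": datetime.now(timezone.utc).isoformat(),
      }

  result = run_custom_test({"databases": [{"name": "orders-prod", "backups_enabled": True}]})
  print(result["status"])  # -> "pass"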


