Senior Software QA/QC Analyst
CDM Smith
Job Description
CDM Smith is seeking a Senior Data Quality Assurance and Quality Control Analyst to join our Digital
Engineering Solutions team. This individual will be part of the Development group within the Digital
Engineering Solutions team, helping design and implement data validation processes that ensure high
data integrity and support robust data pipelines. An ideal candidate will be a critical thinker, proactive, and
highly interested in applying new technologies and methods for data quality improvement. This individual
must showcase exceptional abilities in developing and implementing QA/QC solutions for AEC initiatives
that involve complex datasets and advanced analytics. As a member of the Digital Engineering Solutions
team, the individual will be engaged in research and development, providing guidance and oversight of
data quality standards and practices at CDM Smith, as well as participating in the evaluation and adoption
of innovative technologies and methodologies from across the company. The ideal candidate
must have extensive experience in data quality management and a demonstrated commitment to
continuous improvement and measurable results.
The following are the key responsibilities for this position:
- Take ownership of assigned projects and work independently in a collaborative environment.
- Evaluate data across various dimensions, including accuracy, completeness, consistency, validity,
timeliness, uniqueness, and integrity.
- Establish quality criteria for data accuracy, completeness, consistency, validity, and AI model
performance.
- Test data ingestion, transformation (ETL/ELT), and outputs to ensure source-to-target integrity.
- Verify that datasets used for AI/ML are accurate, representative, unbiased, and fit for purpose.
- Validate model metrics against defined business and technical thresholds, including accuracy,
precision, recall, and stability.
- Identify and assess bias, fairness issues, and unintended impacts in data and AI model outcomes.
- Implement automated tests for data validation, model regression, and pipeline checks within CI/CD
and MLOps workflows.
- Log, prioritize, and track data and AI defects; perform root cause analysis and corrective actions.
- Validate adherence to data governance, privacy, security, and AI regulatory requirements.
- Track data drift, model drift, anomalies, and performance degradation post-deployment.
- Communicate quality status, risks, and recommendations clearly to stakeholders before and after
releases.
- Design and develop testing plans, test cases, and test scripts to evaluate data quality.
- Identify, document, and report defects, inconsistencies, and inaccuracies.
- Maintain and update testing documentation and report on test outcomes. Document key data
processes and transformations.
- Assist with quality improvement initiatives and recommend improvements to data quality processes
and procedures.
- Assist developers, business stakeholders, and data governance teams in meeting data quality
standards.
- Apply Agile and Waterfall methodologies, with an understanding of how quality assurance fits into each.
- Create automated testing solutions from scratch.
- Develop testing scenarios by analyzing feature requirements to estimate testing effort; track defects,
analyze and communicate test results, and engage in daily QA activities.
Skills and Abilities:
- Ability to take ownership of the project and work independently in a team environment.
- Ability to analyze problems, perform root cause analysis, and develop solutions is essential for
resolving data issues.
- Proficient in PyTest and other unit-testing frameworks used for ML testing and automation.
- Knowledge of programming languages such as C#, Python, or R for data process automation and
analysis.
- Proficient in automated test suite development for ML models.
- Working knowledge of ML-specific API testing (e.g., inference endpoints).
- Working knowledge of automated metrics validation for model outputs.
- Expertise in QA principles and testing procedures.
- Working knowledge of data validation frameworks such as Great Expectations.
- Working knowledge of ML-oriented data pipeline tools.
- Working knowledge of validation for large-scale, ML-ready datasets.
- Proficient in ML-specific QA strategy (model tests, feature tests, data contracts, drift tests).
- Working knowledge of CI/CD integration for ML (MLOps).
- Proficient in model-centric regression testing.
- Proficient in testing lineage, reproducibility, and experiment tracking.
- Proficient in developing and maintaining test plans, test scenarios, test cases, test defect tracking,
summary reporting, and test scripts to perform thorough testing and validate the data, based on
business requirements.
- Familiarity with Extract, Transform, Load (ETL) tools to move and transform data.
- Working knowledge of AI governance frameworks, ethical AI and compliance, bias monitoring policies,
and model traceability requirements.
- Working knowledge of debugging tools (e.g., Fiddler).
- Proficiency with version control practices using Git, including branching, collaboration, reviewing pull
requests, and resolving merge conflicts.
- Knowledge of build servers (e.g., Azure DevOps Pipelines, Jenkins, and CruiseControl).
- Working knowledge of source control systems (Azure DevOps, GitHub, SVN, or TFS).
- Working experience with release management and bug-tracking tools (e.g., Zephyr).
- Knowledge of HTML5, CSS, JavaScript, Angular, Bootstrap, SQL, SharePoint, and JSON or XML objects.
- Knowledge of API testing using Postman and Swagger UI.
- Knowledge of Azure cloud infrastructure and its capabilities.
- Strong verbal and written communication skills are needed to explain findings to both technical and
non-technical stakeholders.
- A willingness to learn new tools and adapt to evolving technologies is important for long-term career
growth.
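For a flavor of the testing skills described above, the following is a minimal, hypothetical sketch of PyTest-style data-quality checks covering completeness, uniqueness, and validity. The dataset, column names, and thresholds are illustrative assumptions, not CDM Smith standards.

```python
# Minimal sketch of automated data-quality checks, written in the style of
# a PyTest suite. All column names, thresholds, and sample data below are
# hypothetical and for illustration only.

def check_completeness(records, column, max_null_rate=0.05):
    """Return True if the share of missing values in `column` is within tolerance."""
    nulls = sum(1 for r in records if r.get(column) in (None, ""))
    return nulls / len(records) <= max_null_rate

def check_uniqueness(records, column):
    """Return True if every value in `column` is unique (e.g., a primary key)."""
    values = [r[column] for r in records]
    return len(values) == len(set(values))

def check_validity(records, column, lo, hi):
    """Return True if all values in `column` fall within the range [lo, hi]."""
    return all(lo <= r[column] <= hi for r in records)

# Hypothetical sensor readings standing in for a dataset under test.
ROWS = [
    {"id": 1, "flow_gpm": 120.0},
    {"id": 2, "flow_gpm": 95.5},
    {"id": 3, "flow_gpm": 101.2},
]

def test_no_excess_nulls():
    assert check_completeness(ROWS, "flow_gpm")

def test_ids_are_unique():
    assert check_uniqueness(ROWS, "id")

def test_flow_in_plausible_range():
    assert check_validity(ROWS, "flow_gpm", lo=0.0, hi=500.0)
```

In a CI/CD pipeline, checks like these would run automatically on each data load so that quality regressions are caught before release.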