Announcement: Sergei Gapanovich and I will be going live (7:00 AM CT on Monday, April 13th) shortly after this newsletter hits your inbox. We'll be discussing some of the articles in this issue and sharing our takes on how things are playing out in the Quality space with all the latest AI advances. Join us if you are able!
Learning to Code Still Matters
Learning to code still matters - but how you learn has changed. If you've never built a test automation framework before, jumping straight to an LLM that generates your tests will save you time and cost you understanding. Knowing what good looks like - why Playwright's web-first assertions beat plain expects, which patterns hold up in practice - is what keeps AI-generated code from becoming a maintenance nightmare. Learn the fundamentals first, then let AI accelerate you.
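To make the web-first assertion point concrete: Playwright's `expect(locator).toHaveText(...)` re-checks the live page until the condition holds or a timeout expires, while a plain expect asserts once against whatever the page looked like at that instant. Here is a minimal stdlib Python sketch of that retry idea - the `retry_until` helper and the fake page state are my illustration, not Playwright's API:

```python
import threading
import time

def retry_until(check, timeout=5.0, interval=0.05):
    """Re-run `check` until it returns True or the timeout expires.
    This is the core idea behind web-first assertions: the expectation
    is re-evaluated instead of asserted once on a stale snapshot."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Simulate a page element whose text only appears after a short delay.
state = {"text": ""}
threading.Timer(0.3, lambda: state.update(text="Done")).start()

# A one-shot `assert state["text"] == "Done"` here would fail because
# the text is not ready yet; the retrying check passes once it settles.
assert retry_until(lambda: state["text"] == "Done")
```

The same distinction is why AI-generated tests full of one-shot asserts tend to be flaky: they encode a snapshot, not an expectation.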
If you're looking to get started with Playwright without previous automation experience, the LinkedIn Learning course Learning Playwright is for you. This link should give you free access to the course for a limited time. If the link doesn't work, just reply to this email with your LinkedIn profile link and I can DM you one that should provide temporary access.
For everyone else with experience in coding or automation tooling, asking AI to explain concepts as you go is a great practice. Not sure what a block of code does? Have an LLM walk you through it. Learn by doing, but make sure you're actually learning - not just shipping code you don't understand.
Headlines & Launches
Hot Take: Page Objects Become Irrelevant with Agentic Test Writing
via Artem Bondar / LinkedIn
Artem Bondar argues that Page Object Model patterns lose their value when AI agents write tests, since agents can understand code and refactor locators without abstraction layers. The post sparked lively debate about token costs, maintainability, and whether some coding standards still help even agentic workflows.
My Thoughts on Self-Healing in Test Automation
via Bas Dijkstra / On Test Automation
Bas Dijkstra argues that self-healing test tools are band-aids hiding the real problem: teams not communicating when locators change. The real fix is closing collaboration gaps, not patching them with an algorithm.
Beyond Quality - Collaborative Research Community for Software Quality
via Vitaly Sharovatov / GitHub Discussions
A community focused on synthesizing software quality research into practical knowledge. Open discussions evolve into documented artifacts covering RAG evaluation, the economics of testing, and QA approaches to hiring. A podcast every three weeks and a QA Confessions series are also included.
Tools & Frameworks
WebdriverIO Execute CLI v1.1 - Dynamic Skills for AI Agents
via Vince Graics / LinkedIn
WebdriverIO Execute CLI v1.1 adds dynamic skills so AI assistants discover CLI usage on demand without bloated context. New execute, steps, and navigate/scroll commands, plus --config and --attach flags for connecting to running sessions. Integrates via skills.sh.
AI-Powered Test Impact Analysis via GitHub App + Claude Code
via Illya Bakurov (LinkedIn)
A custom GitHub App that uses Claude Code to analyze PR diffs and predict which tests are at risk of breaking - posting risk assessments to Slack and PR check runs in under 2 minutes. Built in one weekend with a GitHub App + Cloud Run + Mac Mini architecture.
Why I Stopped Writing Playwright Tests & Let Copilot Read the Jira Ticket and Create PR Instead
via Shivan Bharadwaj / Medium
A workflow connecting Jira, Copilot in agent mode, and MCP servers to auto-generate Playwright tests as reviewable PRs. The key shift: moving from "AI writes code" to "AI operates inside a contract" - fetching Jira context, reading repo rules, writing within approved test surfaces, and opening PRs that fit normal team review.
Open-Testing.AI - Open Standard for AI-Powered Testing
via open-testing.ai
Open source initiative providing testing agents, standardized bug formats, and community-driven prompts for AI-assisted software quality assurance. Aims to create open standards at the intersection of AI and testing.
Techniques & Tutorials
Playwright 1.59: AI Agent Debugger + Trace Analysis (New Features)
via Artem Bondar / YouTube
A walkthrough of Playwright 1.59's AI-focused features including the CLI debugger for agents and trace analysis. Covers how coding agents can now debug tests via playwright-cli and explore trace files to diagnose failures.
When the Compass Pointed Random
via The Quality Forge / Dragan Spiridonov
A vector search library was returning random results instead of correct nearest neighbors - and every dashboard stayed green. A compelling case study in why classical testing concepts (oracles, testability, observability) are essential for AI systems. The most dangerous failure mode isn't crashing - it's quietly succeeding with wrong values.
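The oracle concept from that case study generalizes well: for an approximate nearest-neighbor index, an exact brute-force search makes a cheap test oracle, so a library returning plausible-but-random results fails loudly instead of staying green. A stdlib sketch - the `BrokenIndex` stand-in is my illustration of the failure mode, not the library from the article:

```python
import math
import random

def brute_force_nn(query, vectors):
    """Exact nearest neighbor - slow, but trivially correct,
    which is exactly what makes it a good test oracle."""
    return min(range(len(vectors)), key=lambda i: math.dist(query, vectors[i]))

class BrokenIndex:
    """Stand-in for a buggy vector search library that returns
    a random index instead of the true nearest neighbor."""
    def __init__(self, vectors):
        self.vectors = vectors

    def search(self, query):
        return random.randrange(len(self.vectors))

random.seed(0)
vectors = [[random.random() for _ in range(8)] for _ in range(200)]
index = BrokenIndex(vectors)

# A dashboard that only checks "did a result come back?" stays green.
# Comparing against the oracle does not: with a correct index the
# mismatch count would be 0; the broken one disagrees almost every time.
mismatches = sum(
    1 for q in vectors[:50] if index.search(q) != brute_force_nn(q, vectors)
)
assert mismatches > 0
```

The point is testability: "quietly succeeding with wrong values" is only invisible when the test suite has no source of correct answers to compare against.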
The Blind Confidence of AI-Generated Tests
via Test Pappy
A thoughtful take on why AI-generated tests create a false sense of confidence. The real value of writing tests is the understanding you build about the application - something no LLM can replicate. Use AI for scaffolding, but keep the thinking human.
Falling behind on test automation and AI adoption? DevClarity's QA Practice gets your team up to speed fast - with hands-on training, proven workflows, and measurable results within 30 days.
Research & Data
AI and Testing: LangChain Messages
via Tester Stories / Jeff Nyman
Part 5 of the AI and Testing series explores LangChain's message system (HumanMessage, AIMessage, SystemMessage) and how they create structured conversations with LLMs. Includes practical examples of conversation history management and few-shot prompting, framed through a testing lens - manually constructed histories are essentially test fixtures for validating AI behavior.
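The fixture framing is worth pausing on. LangChain represents turns as typed message objects (SystemMessage, HumanMessage, AIMessage); the sketch below models the same structure with a plain stdlib dataclass - my stand-in, not LangChain's API - to show how a hand-built history doubles as a test fixture for few-shot behavior:

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str      # "system", "human", or "ai" - mirroring LangChain's message types
    content: str

# A manually constructed history is a test fixture: it pins down the exact
# conversational state the model should respond to, including a few-shot
# example turn that demonstrates the expected answer format.
history = [
    Message("system", "You are a terse classifier. Answer PASS or FAIL only."),
    Message("human", "Login succeeds with valid credentials."),
    Message("ai", "PASS"),  # few-shot example turn
    Message("human", "Checkout throws a 500 on submit."),
]

def render(messages):
    """Flatten the history into the prompt text an LLM call would receive."""
    return "\n".join(f"{m.role}: {m.content}" for m in messages)

prompt = render(history)
assert prompt.count("human:") == 2
assert prompt.endswith("Checkout throws a 500 on submit.")
```

Because the history is constructed rather than accumulated, the same fixture can be replayed against different models or prompts to validate that the AI's behavior stays within expectations.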
Quick Links
Is There Anything Left in the QA Toolstack That Hasn't Been Eaten by AI Agents? (r/QualityAssurance) - community discussion on which traditional QA tools are being replaced by AI, which are evolving, and what remains firmly human
Doing TDD with Claude Code Is Practically Impossible (Simon Jones / LinkedIn) - long TDD sessions with Claude Code degrade around cycle 30 as the model starts gaming tests instead of implementing real behavior; check the comments for ideas on how to make it better
AI Doesn't Write Good Tests. It Writes Fast Tests. (Rex Jones II / LinkedIn) - practical guide on using agentic AI with Playwright: context engineering and framework architecture are the premium skills now
AI Chapter. Let's Go (Ministry of Testing) - new AI Chapter for community-driven AI learning in QA, planning events, workshops, and collaborative learning. (Requires MoT membership.)
If something in this issue made you think differently about how your team approaches AI in testing, pass it along. The best conversations about AI and QA are happening in Slack channels and stand-ups, not just newsletters.
Have something worth featuring? Reply and send it my way - I read every link.
Thanks for reading,
Butch Mayhew
