Posts

Time is a Wildcard

In software testing (and elsewhere too), it is easy to blame the 'others' - unclear requirements, late code drops, unplanned changes, or inherently complex solutions. Godin's article gently forces a pivot in perspective: we should focus on improving our proactive strategies instead of complaining and dwelling on impediments. It challenges leaders to stop viewing time as a rigid constraint and start using it as a strategic lever to build better software quality. What is your take on it?
Recent posts

Stress Testing AI Models

- Testing AI requires a fundamental shift in mindset
- Stress testing an AI model differs from traditional load testing
- A toolkit for AI testing
- New testing techniques for the transition to AI testing

Link

Testing @ Vibe Coding

Vibe coding, the practice of rapid, AI-assisted development, has introduced a fundamentally new class of software risk: subtle defects such as inconsistent validation and security oversights. For testers, it means a strategy shift from scrutinizing implementation details to continuous behavioral verification and 'shifting wide'. Success for testers will require prioritizing contract and integration coverage, leveraging AI for test generation, and strictly enforcing non-functional requirements such as performance and security. QA must evolve into a high-speed steering system, ensuring that rapid iteration does not compromise user trust. Link

Cultivating AI-Augmented Mindset

AI is not a shortcut for generating final outputs; instead, teams should treat it as a 'thought partner' whose suggestions are rigorously tested and validated by human decision making. By selecting appropriate use cases and embedding AI into existing workflows with clear guardrails in place, leadership can move the team away from mechanical tasks toward deeper risk analysis and more effective testing. (And this is equally applicable to all other teams as well - developers, architects, business analysts, product owners...!) Link

Testing Podcast - AI-Quality-Human Loop Episode

AI is a tool, not a solution for quality itself. Quality in the loop comes from the human ability to question, validate, and refuse to outsource judgment to a robot. AI supports the process with ideas, heuristics, and note-taking, but humans retain ownership of the charters, decisions, and debriefs. Testers add the most value when they act as the accountable layer in this loop! Testing Podcast

Azure App Testing: Unified Performance Hub

Microsoft recently launched Azure App Testing, a unified hub combining Azure Load Testing and Playwright Testing. The platform leverages AI-powered tools and VS Code extensions to automate browser tests and simulate global traffic, simplifying resource management and billing. Check it out at InfoQ.

Creating a Test Framework using AI based tools (Cursor and Fine.AI)

Beth M worked with AI tools for testing and concludes that tools like Cursor and Fine.ai can significantly accelerate the development of test automation frameworks. However, the human angle will continue to play an important role in ensuring the accuracy and reliability of the generated code. She recommends continuing to use such tools for the substantial time savings they offer, but points out a lack of intuitiveness and a learning curve for users accustomed to traditional IDEs. Link