An agile tester guides the entire product development team in ensuring that each feature, and the product as a whole, behaves as intended and without bugs. Agile testers work with product owners and other stakeholders to build a clear, shared understanding of each feature so that everyone on the team speaks the same language. An important part of this understanding is agreeing on how we know when a feature is done. We do this by fleshing out enough examples that each feature can be tested for functional completeness as it is developed.
The tester does additional exploratory testing of the feature as soon as there is enough to be tested, and again after the feature is complete. The purpose of the exploratory testing is: 1) to understand the feature and how it is implemented, 2) to find additional or unexpected behavior, and 3) to refine and define additional test cases for the feature. Unexpected behavior is discussed with the developer and product owner before deciding what to do about it. Often it is simply a refinement of the original idea, but it can also be needed or unneeded additional functionality. Sometimes it is just a plain bug. By addressing these refinements and making sure they are what the product owner or customer wants, we minimize the risk of creating the wrong product. By testing early and often, we give programmers and other stakeholders early feedback about whether the feature is on track. By further developing test cases based on our exploratory testing, we define the intended product behavior and gain regression tests to make sure the feature stays as defined. In short, we create an "executable specification" of the intended product behavior through a suite of tests based on input from the product owner or customer.
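As a sketch of what such an executable specification can look like, consider a small suite of plain assertion tests. The feature, function name, and dollar amounts here are invented for illustration, not taken from any real product:

```python
# Hypothetical executable specification for a "shipping cost" feature.
# Each test encodes one example agreed upon with the product owner.

def shipping_cost(order_total):
    """Toy implementation under test: free shipping at $50 and above,
    a flat $5 fee below that."""
    return 0 if order_total >= 50 else 5

def test_orders_of_fifty_dollars_or_more_ship_free():
    assert shipping_cost(50) == 0

def test_orders_under_fifty_dollars_pay_flat_fee():
    assert shipping_cost(49.99) == 5

def test_empty_cart_still_pays_flat_fee():
    # Captured during exploratory testing: an edge case the product
    # owner confirmed should still charge the flat fee.
    assert shipping_cost(0) == 5
```

Run with a test runner such as pytest, the suite tells anyone on the team, at any time, whether the feature still behaves as specified, which is what makes it serve double duty as a regression net.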
As testers, we are also responsible for ensuring that non-functional requirements (performance, scalability, security, usability, etc.) are met. These responsibilities do not change from traditional testing, except that we raise these issues as stories are selected for each iteration and add stories for these requirements so they are considered during implementation and testing of the remaining feature set.
How does agile testing differ from traditional software testing?
The major difference is that smaller sets of functionality are planned at a time, which increases the likelihood that the features are implemented exactly as planned. Because the "requirements" exist in the form of tests, it is easy to tell whether the software meets the specification. The tests still have to be understandable by non-programmers if they are to be effective as a software specification. This means a different style of writing tests. (An example of this different style of test is described in the next "Resources" section of my website.)
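To illustrate the general idea of that style (the feature, discount code, and amounts here are invented for illustration), a test can be phrased so a non-programmer can read the scenario directly from the names and comments:

```python
# A hypothetical given/when/then style test, written so that a product
# owner can follow the scenario without reading Python internals.

def apply_discount(price, code):
    """Toy implementation under test: the code "SAVE10" takes 10% off."""
    return round(price * 0.9, 2) if code == "SAVE10" else price

def test_customer_with_valid_code_gets_ten_percent_off():
    # Given a cart totaling $100.00
    price = 100.00
    # When the customer applies the code "SAVE10"
    discounted = apply_discount(price, "SAVE10")
    # Then the total is $90.00
    assert discounted == 90.00
```

The test name reads as a sentence and the comments mirror the conversation with the product owner, so the same artifact works as both specification and regression check.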
Can agile testing be done on non-agile development projects?
Sure, and this can be a way to help teams see the value of agile methods in general. You may run into problems where the tests drift out of sync with the requirements (because the requirements are more static and updated less often, if at all). Agile testing more clearly documents such differences and provides a means for resolving them. If requirements are wrong and not fixed, and you code to those requirements, you may end up delivering a product that meets the requirements but not the actual customer needs. If the team wants to move to agile development, testing in an agile way can help the team understand the benefits ahead of time.