A/B Testing Reads: What happens when you reverse test?
Hi, a question for the group: suppose we roll out a new product feature (say, a new filter in an app) and on a pre-post basis we see a dip in conversion. After 2 months we decide to actually A/B test the feature, and the results show that the pre-rollout variant now performs worse than the rollout. How often have you run into this in your testing? To sum up: how often have you observed users getting used to a new feature (even a suboptimal one), and how do you quantify that effect?
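For concreteness, here's a minimal sketch of how one might read the reverse test itself, using a two-proportion z-test on its two arms. All counts below are hypothetical placeholders, and the comparison against the original pre-post dip is just one rough way to bound the adaptation effect, not an established method:

```python
# Minimal sketch: two-proportion z-test on a reverse test.
# All numbers are hypothetical placeholders.
from math import sqrt
from scipy.stats import norm

# Reverse test: "old" reverts users to the pre-rollout experience,
# "new" keeps the filter that has been live for 2 months.
conv_old, n_old = 480, 10_000   # reverted (pre-rollout) variant
conv_new, n_new = 520, 10_000   # rolled-out variant

p_old, p_new = conv_old / n_old, conv_new / n_new
p_pool = (conv_old + conv_new) / (n_old + n_new)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_old + 1 / n_new))
z = (p_new - p_old) / se
p_value = 2 * norm.sf(abs(z))  # two-sided

print(f"lift: {p_new - p_old:+.4f}, z = {z:.2f}, p = {p_value:.3f}")

# If the reverted variant now loses, users may have adapted to the new
# filter. Comparing this measured lift to the original pre-post dip
# gives a rough estimate of the adaptation (primacy/novelty) effect.
```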
- savidge4