A number of our customers like to test their widget design and interface in a development environment prior to launching it in production.
There are a few different ways of doing this, each with its own pros and cons. The first approach is to set up a completely separate development organization, which gives you:
- Separate inventory
- Separate activity
- Separate machine-learned models based on the inventory and activity
If you choose to go down this route, it's important to remember:
- The actual inventory items recommended will come from the development organization. In particular, they will be limited to items for which we have seen some activity in the development organization, which could be a small subset of your items.
- The recommendations will be based purely on development-organization activity, and will therefore be heavily swayed by the non-representative actions of the small number of people accessing that organization.
With these caveats, this separate setup might still be good enough for you to test the recommendation display and integration details.
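One way to keep the two organizations cleanly separated is to select the key and organization from the environment at startup. The sketch below is purely illustrative: the type names, key values, and organization IDs are hypothetical placeholders, not part of any real API.

```typescript
// Hypothetical sketch: key values and organization IDs are placeholders.
type Environment = "production" | "development";

interface WidgetConfig {
  apiKey: string;         // key scoped to this environment's organization
  organizationId: string;
}

const configs: Record<Environment, WidgetConfig> = {
  production:  { apiKey: "prod-key-xxxx", organizationId: "org-prod" },
  development: { apiKey: "dev-key-xxxx",  organizationId: "org-dev" },
};

// Look up the configuration for the current environment, so the widget
// in development can never accidentally use the production key.
function configFor(env: Environment): WidgetConfig {
  return configs[env];
}
```

Centralizing the lookup like this makes it hard for development traffic to reach the production organization by mistake.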
The second approach is to keep just one organization, for your production environment. In the development environment, you use the same key, thereby fetching production recommendations.
The main caveat here is that you should make sure that the development environment does not pollute the production activity or inventory streams. You also need to keep in mind that the recommendations we return may be suboptimal. We discuss these separately:
- Activity stream pollution: This is generally a non-issue since the volume of activity on the development site is small relative to the production site.
- Suboptimal recommendations (if URL structures do not match): If the URLs in the development environment do not match those in the production environment, we may not be able to correctly identify the user's context and therefore may return worse recommendations than we would in production.
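Both caveats can be addressed in the widget integration itself. The sketch below is a hypothetical illustration, assuming a production hostname of `www.example.com` and a development hostname of `dev.example.com`; the function names and the activity-reporting hook are placeholders to adapt to your own tracking code, not a real API.

```typescript
// Hypothetical: hostnames and function names are illustrative only.
const hostname: string | undefined = (globalThis as any).location?.hostname;
const IS_PRODUCTION = hostname === "www.example.com";

// Gate activity reporting on the environment, so development traffic
// never pollutes the production activity stream.
function reportActivity(event: object): void {
  if (!IS_PRODUCTION) return;
  // ... send the event to the activity endpoint ...
}

// Rewrite a development URL to its production equivalent before
// requesting recommendations, so the user's context can be matched
// against production inventory.
function toProductionUrl(devUrl: string): string {
  return devUrl.replace(
    /^https?:\/\/dev\.example\.com/,
    "https://www.example.com"
  );
}
```

With the activity gate in place, the pollution concern disappears entirely, and the URL rewrite addresses the context-matching caveat as long as the development site mirrors the production URL structure under a different host.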