Migrating from the Site Genesis cartridge to the SFRA cartridge
As stated on the Site Genesis cartridge page, the Site Genesis catalog integration (also known as major version 3) is maintained as a legacy product.
The SFRA version (also known as major version 4) brings many improvements to provide better performance and stability, along with setting up the cartridge for future improvements.
This update contains many breaking changes. The aim of this guide is to help you through the migration from the Site Genesis cartridge to the SFRA cartridge.
New features
- The cartridge does not use the file system anymore. Instead, all operations are done in memory.
- Replaced all provided jobs with simpler versions that use the Salesforce chunk-oriented job architecture.
- Added support for reporting job execution progress (for example, `8000 of 1000 was processed.`).
- Added retry support for API requests (by default, it will retry 3 times).
- Improved test coverage across all cartridge files.
Breaking changes
Job names
The job names have changed in this update. The new names are:
- `ConstructorSyncProductData` -> `Constructor.SyncProducts`
- `ConstructorSyncInventoryData` -> `Constructor.PatchProducts`
- `ConstructorSyncCategoryData` -> `Constructor.SyncCategories`
With those new names, we aim to make the cartridge more consistent with the overall platform.
Patch products job
Previously, the cartridge provided an opinionated job to update product inventories (`ConstructorSyncInventoryData`), which sometimes caused confusion as users thought no other fields could be updated from this job.
The SFRA version provides a job that is completely agnostic to which fields are going to be updated (`Constructor.PatchProducts`), and removes the default inventory transformations, as those might change from customer to customer.
Job architecture change
The Site Genesis cartridge used step-oriented jobs, where each step was responsible for a specific task (for example, reading data, writing temporary files, or sending data). This meant the job ran strictly sequentially and had limitations such as not being able to send any data until it had finished reading and writing all of it.
Now we're using chunk-oriented jobs, which adopt a stream-based approach, allowing us to read and send data in one single flow, while gaining benefits such as reporting the job progress to Salesforce so that you can track the execution on the jobs page (for example, `8000 of 1000 was processed.`).
Additionally, the jobs previously consisted of 3 steps: `writeData`, `sendDeltas` and `updateLastSyncDate`. Now each job contains only one step, which performs the whole sync as a single stream of data.
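If you've never written a chunk-oriented step before, the contract is just a script module exporting a handful of hook functions that the platform calls in a loop. The sketch below is a generic illustration of those standard hooks, not a file from the cartridge:

```js
// Generic chunk-oriented step module (illustrative only, not cartridge code).
// The platform calls read() and process() repeatedly, then write() once per chunk,
// so data is streamed out in small batches instead of staged on disk first.

module.exports.getTotalCount = function () {
  // Lets the platform report progress on the jobs page ("X of Y was processed").
  return 0; // return the number of items you expect to read
};

module.exports.read = function () {
  // Return the next item, or undefined when there is nothing left to read.
};

module.exports.process = function (item) {
  // Transform a single item; returned values are collected into the current chunk.
  return item;
};

module.exports.write = function (chunk) {
  // Receives the processed items for one chunk; send them to the API here.
};
```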
Job step parameters
Job steps have also changed to include new parameters, making it easier to create new jobs and scale out to new locales, countries and so on. When uploading the metadata zip file to create the new jobs, you'll need to update the parameters to make sure you migrate the job configuration correctly.
Ingestion strategy parameter
Previously, the ingestion strategy was defined in three separate custom site preferences:
Constructor_ProductIngestionStrategy
Constructor_CategoryIngestionStrategy
Constructor_InventoryIngestionStrategy
Now that the jobs are much simpler, those custom site preferences have been removed. Instead, you should specify the ingestion strategy on the job step configuration. Also note that the option for categories was removed, since categories should always do a `FULL` sync by default.
The cartridge will still fall back to a `DELTA` ingestion if the sync was triggered with filters, to avoid erasing the catalog.
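To make that fallback concrete, the behavior described above amounts to something like the sketch below. This is only an illustration; the parameter names are assumptions, not the cartridge's actual code:

```js
// Illustrative sketch of the fallback described above (parameter names are assumptions).
function resolveIngestionStrategy(parameters) {
  // A filtered sync should never erase products outside the filter,
  // so fall back to DELTA whenever filters were provided.
  if (parameters.hasFilters) {
    return 'DELTA';
  }

  // Otherwise use whatever the job step configuration specifies.
  return parameters.ingestionStrategy || 'FULL';
}
```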
Functions & general architecture
To make the cartridge easier to test and maintain, we've changed the function names and the overall architecture.
Previously, functions were grouped by "behavior" and one file could contain and export multiple functions. This made it harder to add overlays, since you had to overlay a whole file even if you only wanted to customize a single function defined in it.
Now we're aiming to have a more functional approach, where each function is exported from its own file and can be customized and tested independently.
The biggest changes here are how you transform data and how we abstract the job executions. Here are the highlights:
Product data
All product data functions are now extracted into separate files, so it's really easy to customize only what you need to change.
Previously you would find all transformations in `customizeProductData.js` and would need to overlay the whole file. Now, you need to find the specific file that contains the behavior you want to overlay. For example, if you want to change how product URLs are resolved, you can overlay only one small function:
`cartridges/link_constructor_connect/cartridge/scripts/helpers/products/getUrl.js`
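For example, an overlay in `link_constructor_connect_custom` could replace just that file. The sketch below is only illustrative; the real `getUrl.js` signature may differ, so copy the base file and adjust it rather than pasting this as-is:

```js
// link_constructor_connect_custom/cartridge/scripts/helpers/products/getUrl.js
// Hypothetical overlay; check the base cartridge file for the actual signature.
var URLUtils = require('dw/web/URLUtils');

module.exports = function getUrl(product) {
  // Example customization: always resolve an absolute HTTPS storefront URL.
  return URLUtils.https('Product-Show', 'pid', product.ID).toString();
};
```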
Facets and metadata
Previously we had one function for facets and one for metadata. This caused issues with performance, since you likely needed to make the same calculations twice to add a value to both facets and metadata.
Now, we have one function to build both facets and metadata, and two different implementations: one for products and one for variations.
Take a look at:
- `cartridges/link_constructor_connect/cartridge/scripts/helpers/products/getFacetsAndMetadata.js`
- `cartridges/link_constructor_connect/cartridge/scripts/helpers/products/getFacetsAndMetadataForVariation.js`
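If you overlay either of these, the idea is to compute each value once and push it to both collections in a single pass. The sketch below is only an assumption about the return shape; confirm it against the base files before reusing it:

```js
// Hypothetical overlay of getFacetsAndMetadata.js; the return shape and field
// names are assumptions, so mirror whatever the base file actually does.
module.exports = function getFacetsAndMetadata(product) {
  var facets = [];
  var metadata = [];

  // Calculate the value once and reuse it for both facets and metadata,
  // instead of computing it twice as the old split functions required.
  var brand = product.brand;
  if (brand) {
    facets.push({ key: 'brand', value: brand });
    metadata.push({ key: 'brand', value: brand });
  }

  return { facets: facets, metadata: metadata };
};
```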
Wrapping sync jobs
To make sync jobs simpler and easier to expand, the SFRA version provides an abstraction called `SyncAgent`.
This handles all the logic needed to implement a chunk-oriented job, including calculating the total count, reading and processing the data, sending the chunks to Constructor, and applying any needed customizations.
If you implemented new jobs to sync other data (for example blog posts, recipes, etc.), you'll want to port them over to use the `SyncAgent` implementation.
It should be simple to use. You'll need to initialize it in `beforeStep` and use it in the next steps.
For example, take a look at how it's initialized for the products job:
```js
var syncAgent = null;

module.exports.beforeStep = function (rawParameters, stepExecution) {
  var parseProductParameters = require('*/cartridge/scripts/jobs/sync/products/parseProductParameters');
  var buildProductApiPayload = require('*/cartridge/scripts/jobs/sync/products/buildProductApiPayload');
  var transformProduct = require('*/cartridge/scripts/helpers/products/transformProduct');
  var ProductReader = require('*/cartridge/scripts/jobs/sync/products/productReader');
  var SyncAgent = require('*/cartridge/scripts/jobs/sync/syncAgent');
  var feedTypes = require('*/cartridge/scripts/constants/feedTypes');

  var parameters = parseProductParameters(rawParameters, stepExecution);

  syncAgent = SyncAgent.create({
    reader: ProductReader.create({ parameters: parameters }),
    buildCustomApiPayload: buildProductApiPayload,
    transformer: transformProduct,
    type: feedTypes.product,
    parameters: parameters
  });
};
```
Once initialized, it can handle any chunk-oriented job step for you. For example:
```js
module.exports.getTotalCount = function () {
  return syncAgent.getTotalCount();
};
```
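The other chunk hooks typically delegate to the agent in the same way. The method names below follow the pattern above but are assumptions; mirror whatever `syncProducts.js` actually calls in the version you install:

```js
// Assumed delegation pattern for the remaining hooks; verify the exact
// SyncAgent method names against syncProducts.js.
module.exports.read = function () {
  return syncAgent.read();
};

module.exports.process = function (product) {
  return syncAgent.process(product);
};

module.exports.write = function (chunk) {
  syncAgent.write(chunk);
};

module.exports.afterStep = function (success) {
  syncAgent.afterStep(success);
};
```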
Take a look at the sync products job for a full example:
`cartridges/link_constructor_connect/cartridge/scripts/jobs/syncProducts.js`
How to update
1. Installing the new cartridge
Taking into consideration the breaking changes listed above, here's a to-do list you can follow to update the cartridge:
- Download the new cartridge version and upload it to your instance.
- Upload and import the metadata zip file to create the new jobs.
- From the old jobs, copy over the job step preferences to the new jobs so that you maintain the same configuration.
- Also copy over the scheduled runs for your jobs, to make sure they keep running on your periodic schedules.
- Delete the old jobs.
2. Migrating your customizations
You'll also need to migrate your customizations, if you have any. Ideally, you should already be using the `link_constructor_connect_custom` overlay cartridge provided in the installation, instead of directly modifying the cartridge files.
As mentioned earlier, function names have changed to make the cartridge easier to test and maintain. Facets and metadata are the most likely things you've customized, so pay close attention to those customizations to make sure they're migrated correctly.
3. Migrating your custom jobs
Finally, if you added any new jobs (for example a job to send content to Constructor), you should rewrite them using the `SyncAgent` implementation to handle sending your data.
The previous implementation will likely not be compatible, since we're using new endpoints on the Constructor API.
Take a look at any sync job to see how it is implemented, for example:
`cartridges/link_constructor_connect/cartridge/scripts/jobs/syncProducts.js`
In short, you'll need to:
- Create a new file under `scripts/jobs` to hold the new job.
- Implement the new job behavior using the `SyncAgent` abstraction (see the sketch after this list).
- Add your job to `jobs.xml`, and run `npm run package:metadata-file` to generate the new metadata zip file.
- Upload and import the metadata zip file to create the new job.
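As an illustration, a new content job might look like the sketch below. Everything except `SyncAgent` itself is a placeholder: the reader, transformer and feed type are modules and values you would create and name yourself, so treat this as a pattern rather than working code:

```js
// Hypothetical scripts/jobs/syncBlogPosts.js, modeled on the products job shown earlier.
// ContentReader, transformBlogPost and the feed type string are placeholders.
var syncAgent = null;

module.exports.beforeStep = function (rawParameters, stepExecution) {
  var SyncAgent = require('*/cartridge/scripts/jobs/sync/syncAgent');
  // Placeholder modules you would implement yourself:
  var ContentReader = require('*/cartridge/scripts/jobs/sync/content/contentReader');
  var transformBlogPost = require('*/cartridge/scripts/helpers/content/transformBlogPost');

  syncAgent = SyncAgent.create({
    reader: ContentReader.create({ parameters: rawParameters }),
    transformer: transformBlogPost,
    type: 'blog_posts', // placeholder feed type
    parameters: rawParameters
  });
};

// The remaining chunk hooks delegate to syncAgent, exactly as in the products job.
module.exports.getTotalCount = function () {
  return syncAgent.getTotalCount();
};
```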