Fixes: FRMW-2960
This PR adds support for processing large CSV files by breaking them into chunks and processing one chunk at a time. This is how it works in a nutshell:
- The CSV file is read as a stream, and each chunk of the stream is one CSV row.
- We read up to 1000 rows per chunk (plus a few more to ensure the variants of a product are not split across multiple chunks); see the sketch after this list.
- Each chunk is then normalized using the `CSVNormalizer` and validated using Zod schemas. If there is an error, the entire process is aborted and the existing chunks are deleted.
- Each chunk is written to a JSON file, so that we can process it later (after the user confirms) without re-processing or re-validating the CSV file.
- The confirmation process then consumes one chunk at a time and creates/updates products using the `batchProducts` workflow.
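For illustration, a minimal sketch of the chunking loop described above. The `writeChunkFile` helper and the `Product Handle` column name are assumptions for the example, not the actual implementation:

```ts
type CSVRow = Record<string, string>

const CHUNK_SIZE = 1000

/**
 * Consume CSV rows as a stream and flush them to JSON chunk files of
 * roughly 1000 rows. A chunk is only flushed at a product boundary,
 * so all variants of one product stay in the same chunk.
 *
 * `writeChunkFile` is a hypothetical helper that persists one chunk
 * as JSON for later processing.
 */
async function chunkRows(
  rows: AsyncIterable<CSVRow>,
  writeChunkFile: (index: number, rows: CSVRow[]) => Promise<void>
): Promise<number> {
  let chunk: CSVRow[] = []
  let chunkIndex = 0
  let lastProductHandle: string | undefined

  for await (const row of rows) {
    // Past the soft limit, keep reading until the product handle
    // changes, so a product's variants are never split across chunks.
    if (chunk.length >= CHUNK_SIZE && row["Product Handle"] !== lastProductHandle) {
      await writeChunkFile(chunkIndex++, chunk)
      chunk = []
    }
    chunk.push(row)
    lastProductHandle = row["Product Handle"]
  }

  if (chunk.length) {
    await writeChunkFile(chunkIndex++, chunk)
  }

  return chunkIndex // total number of chunks written
}
```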
## To resume or not to resume processing of chunks
Let's imagine that, while processing chunks, chunk 3 leads to a database error; by this time, we have already processed the first two chunks. How do we deal with this situation? One option is:
- We store the chunk at which we failed, and during the re-upload we skip the chunks before the failed one. In my conversation with @olivermrbl, we discovered that resuming would have to work under certain assumptions if we decide to implement it (see the sketch after this list):
  - What if a user updates CSV rows that are part of already processed chunks? Those changes would be ignored, and the user would never notice.
  - Resuming only works if the file name stays the same. What if they make changes and save the file via "Save as" under a new name? In that case, we would process the entire file anyway.
  - We would have to fetch the old workflow from the workflow engine using some `ilike` search, so that we can see at which chunk the last run failed for the given file.
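If we did implement resuming, the bookkeeping could be roughly this small. Everything below is purely illustrative and does not exist in the codebase:

```ts
type ImportCheckpoint = {
  fileName: string // resuming assumes the re-uploaded file keeps this name
  failedChunkIndex: number
}

/**
 * Decide which chunks to process on a re-upload. Chunks before the
 * previously failed one are skipped, with the caveats listed above:
 * edits to rows inside skipped chunks would be silently ignored.
 */
function chunksToProcess(
  totalChunks: number,
  checkpoint: ImportCheckpoint | undefined,
  fileName: string
): number[] {
  const start =
    checkpoint && checkpoint.fileName === fileName
      ? checkpoint.failedChunkIndex // resume at the failed chunk
      : 0 // new or renamed file: process everything
  return Array.from({ length: totalChunks - start }, (_, i) => start + i)
}
```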
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Fixes: FRMW-2968
In this PR, we have done two major things:
- First, we stop storing CSV contents in the workflow storage, and we no longer store the JSON payloads to be created/updated in workflows either. Earlier, they were all workflow inputs and hence were stored with the workflow.
- Second, we introduce a naive concept of chunks and process them one by one. The next PR will make chunking a bit more robust by using streams, adding the ability to resume from a failed chunk, and so on.
> [!IMPORTANT]
> The new endpoint `/admin/product/imports` is not in use yet, but it will be after the next (final) PR.
## Old context in workflow storage

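Roughly, the whole file and the computed payloads lived inline as workflow input. An approximate, illustrative shape (field names are assumptions, not the actual stored document):

```ts
// Approximate shape only: everything lived inside the workflow input.
const oldContext = {
  input: {
    fileContent: "Product Handle,Product Title,...", // full raw CSV
    toCreate: [/* full JSON payloads for every product to create */],
    toUpdate: [/* full JSON payloads for every product to update */],
  },
}
```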
## New context in workflow storage

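Now the workflow only references where the chunks live. Again, an approximate, illustrative shape (field names are assumptions):

```ts
// Approximate shape only: the workflow keeps references, not contents.
const newContext = {
  input: {
    fileKey: "product-imports/upload-123.csv", // hypothetical file key
    chunks: ["chunk-0.json", "chunk-1.json"], // processed one by one
  },
}
```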
Fixes: FRMW-2965
In this PR, we replace the existing step that normalizes a CSV file with the newly written CSV normalizer, and we also validate the file contents further using a Zod schema.
I have duplicated the schema for now. But if it makes sense to re-use the same schema for CSV validation and `/admin/products/batch`, I can keep one source of truth under utils and re-export it. WDYT?
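For reference, a trimmed, illustrative sketch of such a row schema; the column names are examples, and the real schema covers many more fields:

```ts
import { z } from "zod"

// Illustrative subset of the row schema; the real one validates the
// full set of import columns.
const productRowSchema = z
  .object({
    "Product Handle": z.string().min(1),
    "Product Title": z.string().min(1),
    "Variant SKU": z.string().optional(),
    "Variant Price": z.coerce.number().nonnegative().optional(),
  })
  .passthrough()

// Strict validation aborts the import with row-level errors.
export function validateRow(row: unknown, line: number) {
  const result = productRowSchema.safeParse(row)
  if (!result.success) {
    const messages = result.error.issues.map((i) => i.message).join(", ")
    throw new Error(`Row ${line}: ${messages}`)
  }
  return result.data
}
```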
**Screenshots of some errors after validating the file strictly**


* fix(core-flows): use product title for line item title
* fix: module tests
* fix: http tests
* fix: display item subtitle instead of prod title as secondary text in line item
* fix: claim/exchange items
Move fulfillment workflow events to be with other workflow events.
Could be considered a breaking change for users relying on the previous `FulfillmentEvents` variable.
* fix: return fulfillment creation
* chore: changeset
* fix: link
* fix: cancel claim flow
* chore: test fulfillment creation as a part of return lifecycle test
* fix: exchanges cancel flow
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* fix(core-flows): Add more fulfillment data
* fix(core-flows): Add more fulfillment data
* add more fields
---------
Co-authored-by: Riqwan Thamir <rmthamir@gmail.com>