* wip
* add wip
* wip
* reuse action
* finish first draft
* fix tests
* cleanup
* Only compute adjustments when necessary
* Create hot-carrots-look.md
* address comments
* minor tweaks
* fix pay col
* fix test
* wip
* fix: adjustment typo
* fix: import
* fix: workflow imports
* wip: update test
* feat: upsert versioned adjustments when previewing order
* fix: revert unique codes change
* fix: order spec test with versioning
* wip: save
* feat: make adjustments work for preview and confirm flow, wip base repo filtering of older version adjustments
* fix: missing populate where
* wip: populate where loading versioned adjustments
* fix: filter out older adjustment versions
* temp: comment out adjustments in repo
* test: add adjustment if no version
* wip: configure populate where in order base repository
* fix: rm manual filtering
* fix: revert base repo changes
* fix: revert
* fix: use order item version instead of order version
* fix: rm only in test
* fix: update case spec
* fix: remove scenario, wip test with draft promotion
* feat: test correct adjustments when disabling promotion
* feat: complex test case
* feat: test consecutive order edits
* feat: 2 promotions test case with a fixed promo
* feat: migrate existing order line item adjustments to order items latest version
* feat: update dep after merge
* wip: load adjustments separately
* feat: adjustments collections
* fix: spread result, handle related entity case
* fix: update lock
* feat: make sure version is loaded, refactor, handle related entity case
* fix: check fields
* feat: loading adjustments for list and count
* fix: correct items version field
* fix: rm empty array
* fix: wip order modules spec
* fix: order module specs
* feat: preinit items adjustments
* fix: rm only
* chore: cleanup
* fix: migration files
* fix: don't change formatting
* fix: core package build
* chore: more cleanup
* fix: item update util
* fix: duplicate import
* fix: refresh adjustments for exchanges (#13992)
* wip: exchange adjustments
* feat: test - receive items
* feat: finish test case
* fix: casing
* fix(draft-orders, core-flows, orders): refresh adjustments for draft orders (#14025)
* wip: draft orders adjustments refresh
* feat: rewrite to use REPLACE action + test
* fix: rm only
* feat: cleanup old REPLACE actions
* feat: cleanup adjustments when 0 promotions
* wip: canceling draft order
* fix: make version arg optional
* fix: restore promotion links
* feat: test reverting on cancellation
* fix: address comments in tests
* wip: fix summary on preview
* fix: get pending diff on preview summary from total
* fix: revert pending diff change
---------
Co-authored-by: fPolic <mainacc.polic@gmail.com>
Co-authored-by: Frane Polić <16856471+fPolic@users.noreply.github.com>
* chore(): Cleanup and organize deps
* Create lucky-poets-scream.md
* chore(): Cleanup and organize deps
* dedupe snapshots in this build
* split into 4 shards
* reconfigure packages integration tests
* update scripts
* reduce shard count for packages
RESOLVES CORE-1153
**What**
- This PR mainly lays the foundation for the caching layer. It comes with a module (a built-in in-memory cache) and a Redis provider; a hedged sketch of the provider shape follows below.
- Applies caching to a few touch points to test it.
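Purely as an illustration (the PR's actual interface may differ), here is a minimal TypeScript sketch of what such a cache provider could look like; the names `ICacheProvider` and `InMemoryCacheProvider` and the `ttlSeconds` parameter are assumptions, not the module's real API:

```ts
// Hypothetical provider shape; not the PR's actual interface.
interface ICacheProvider {
  get<T>(key: string): Promise<T | null>
  set<T>(key: string, value: T, ttlSeconds?: number): Promise<void>
  invalidate(key: string): Promise<void>
}

// A minimal in-memory provider. A Redis provider would implement the
// same interface on top of a Redis client instead of a Map.
class InMemoryCacheProvider implements ICacheProvider {
  private store = new Map<string, { value: unknown; expiresAt: number | null }>()

  async get<T>(key: string): Promise<T | null> {
    const entry = this.store.get(key)
    if (!entry) return null
    if (entry.expiresAt !== null && entry.expiresAt < Date.now()) {
      this.store.delete(key) // expired: drop it lazily on read
      return null
    }
    return entry.value as T
  }

  async set<T>(key: string, value: T, ttlSeconds?: number): Promise<void> {
    this.store.set(key, {
      value,
      expiresAt: ttlSeconds ? Date.now() + ttlSeconds * 1000 : null,
    })
  }

  async invalidate(key: string): Promise<void> {
    this.store.delete(key)
  }
}
```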
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
* test(): test dynamic max workers
* Clarify test description and improve CI
* chore(): Upgrade MikroORM
* handle 'null' value for big number props
* 6.5.2
* remove only
* fix pricing module rule value
* switch to select-in strategy for balances
* revert to select-in strategy for order module
* fix defining DML ManyToOne
* fix define relationship
* test fix
* more fixes
* change order strategy to balanced
* prevent unnecessary manager fork
* revert generated www changes
* remove unnecessary changes
* Create real-cobras-deny.md
* address feedback
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
Glob 7 uses the `inflight` module, which leaks memory. All other Medusa packages already use glob 10+, so the version used by the framework has been upgraded as well.
Fixes: FRMW-2972
Fixes: FRMW-2960
This PR adds support for processing large CSV files by breaking them into chunks and processing one chunk at a time. This is how it works in a nutshell (a rough sketch of the chunking step follows the list).
- The CSV file is read as a stream and each chunk of the stream is one CSV row.
- We read up to 1000 rows (plus a few more to ensure product variants of a product are not split into multiple chunks).
- Each chunk is then normalized using the `CSVNormalizer` and validated using zod schemas. If there is an error, the entire process is aborted and the existing chunks are deleted.
- Each chunk is written to a JSON file, so that we can process it later (after the user confirms) without re-parsing or re-validating the CSV file.
- The confirmation process then consumes one chunk at a time and creates/updates products using the `batchProducts` workflow.
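As an illustration only, here is a rough TypeScript sketch of the chunking step under the assumptions above; `csv-parse` is just one possible streaming parser, and the `"Product Handle"` column name and the `splitCsvIntoChunks` helper are hypothetical, not the PR's actual code:

```ts
import { createReadStream, promises as fs } from "node:fs"
// csv-parse is one possible streaming CSV parser; the PR may use another.
import { parse } from "csv-parse"

const CHUNK_SIZE = 1000

// Reads the CSV as a stream and writes chunks of ~1000 rows to JSON
// files, keeping all variants of a product in the same chunk.
async function splitCsvIntoChunks(filePath: string, outDir: string) {
  const parser = createReadStream(filePath).pipe(parse({ columns: true }))

  let rows: Record<string, string>[] = []
  let chunkIndex = 0

  const flush = async () => {
    if (!rows.length) return
    await fs.writeFile(`${outDir}/chunk-${chunkIndex++}.json`, JSON.stringify(rows))
    rows = []
  }

  for await (const row of parser) {
    // Only cut the chunk when a new product starts, so a product's
    // variants are never split across two chunks ("plus a few more").
    if (
      rows.length >= CHUNK_SIZE &&
      rows[rows.length - 1]["Product Handle"] !== row["Product Handle"]
    ) {
      await flush()
    }
    rows.push(row)
  }
  await flush()
}
```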
## To resume or not to resume processing of chunks
Let's imagine that, while processing chunks, chunk 3 leads to a database error. By this point, the first two chunks have already been processed. How do we deal with this situation? The options are:
- We store which chunk failed, and during the re-upload we ignore the chunks before it (a hypothetical sketch follows this list). In my conversation with @olivermrbl, we discovered that resuming would have to rely on certain assumptions if we decide to implement it:
  - What if a user updates CSV rows that are part of the already processed chunks? Those changes would be ignored, and they would never notice it.
  - Resuming only works if the file name stays the same. If they made changes and saved the file under a new name ("Save as"), we would process the entire file anyway.
  - We would have to fetch the old workflow from the workflow engine using some `ilike` search, so that we can see at which chunk the last run failed for the given file.
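Purely to illustrate the first option (it was not implemented in this PR), a hypothetical sketch of the skip-on-resume logic; every name here, including `ChunkCheckpoint` and `processChunksWithResume`, is illustrative:

```ts
// Hypothetical bookkeeping for the first option; not implemented in the PR.
type ChunkCheckpoint = { fileName: string; lastFailedChunk: number }

async function processChunksWithResume(
  fileName: string,
  chunkFiles: string[],
  checkpoint: ChunkCheckpoint | null,
  processChunk: (file: string) => Promise<void>,
  saveCheckpoint: (c: ChunkCheckpoint) => Promise<void>
): Promise<void> {
  // Resume only applies when the same file is re-uploaded (same name).
  const start =
    checkpoint && checkpoint.fileName === fileName
      ? checkpoint.lastFailedChunk
      : 0

  for (let i = start; i < chunkFiles.length; i++) {
    try {
      await processChunk(chunkFiles[i])
    } catch (err) {
      // Record where we failed so the next run can skip earlier chunks.
      await saveCheckpoint({ fileName, lastFailedChunk: i })
      throw err
    }
  }
}
```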
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
**What**
- Bumps the versions of Vite across the entire stack to prevent an issue similar to the one described here: https://github.com/vitejs/vite/discussions/18271
I'm not entirely sure what was happening, as I couldn't reproduce the issue, but Adrien faced it yesterday when working with local versions of our packages. It does appear that the version range we had before could allow a version of Vite with said bug to be installed.