* wip
* add wip
* wip
* reuse action
* finish first draft
* fix tests
* cleanup
* Only compute adjustments when necessary
* Create hot-carrots-look.md
* address comments
* minor tweaks
* fix pay col
* fix test
* wip
* wip
* wip
* fix: adjustment typo
* fix: import
* fix: workflow imports
* wip: update test
* feat: upsert versioned adjustments when previewing order
* fix: revert unique codes change
* fix: order spec test with versioning
* wip: save
* feat: make adjustments work for preview and confirm flow, wip base repo filtering of older version adjustments
* fix: missing populate where
* wip: populate where loading versioned adjustments
* fix: filter out older adjustment versions
* temp: comment adjustments in repo
* test: add adjustment if no version
* wip: configure populate where in order base repository
* fix: rm manual filtering
* fix: revert base repo changes
* fix: revert
* fix: use order item version instead of order version
* fix: rm only in test
* fix: update case spec
* fix: remove scenario, wip test with draft promotion
* feat: test correct adjustments when disabling promotion
* feat: complex test case
* feat: test consecutive order edits
* feat: 2 promotions test case with a fixed promo
* feat: migrate existing order line item adjustments to order items latest version
* feat: update dep after merge
* wip: load adjustments separately
* feat: adjustments collections
* fix: spread result, handle related entity case
* fix: update lock
* feat: make sure version is loaded, refactor, handle related entity case
* fix: check fields
* feat: loading adjustments for list and count
* fix: correct items version field
* fix: rm empty array
* fix: wip order modules spec
* fix: order module specs
* feat: preinit items adjustments
* fix: rm only
* fix: rm only
* chore: cleanup
* fix: migration files
* fix: dont change formatting
* fix: core package build
* chore: more cleanup
* fix: item update util
* fix: duplicate import
* fix: refresh adjustments for exchanges (#13992)
* wip: exchange adjustments
* feat: test - receive items
* feat: finish test case
* fix: casing
* fix(draft-orders, core-flows, orders): refresh adjustments for draft orders (#14025)
* wip: draft orders adjustments refresh
* feat: rewrite to use REPLACE action + test
* fix: rm only
* feat: cleanup old REPLACE actions
* feat: cleanup adjustments when 0 promotions
* wip: canceling draft order
* fix: make version arg optional
* fix: restore promotion links
* feat: test reverting on cancellation
* fix: address comments in tests
* wip: fix summary on preview
* fix: get pending diff on preview summary from total
* fix: revert pending diff change
---------
Co-authored-by: fPolic <mainacc.polic@gmail.com>
Co-authored-by: Frane Polić <16856471+fPolic@users.noreply.github.com>
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* Create lucky-poets-scream.md
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* dedupe snapshots in this build
* split into 4 shards
* reconfigure packages integration tests
* reconfigure packages integration tests
* reconfigure packages integration tests
* reconfigure packages integration tests
* update scripts
* update scripts
* update scripts
* update scripts
* update scripts
* update scripts
* update scripts
* update scripts
* reduce shard count for packages
RESOLVES CORE-1153
**What**
- This PR mainly lays the foundation for the caching layer. It comes with a module (a built-in in-memory cache) and a Redis provider (a rough sketch of the provider contract follows below).
- It applies caching to a few touch points as a test.
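To make the idea concrete, here is a minimal sketch of the shape such a provider contract can take. Everything below is illustrative: `ICacheProvider`, `InMemoryCacheProvider`, and their signatures are invented for this example and are not the module's actual API.

```ts
// Illustrative sketch only -- names and signatures are invented here.
interface ICacheProvider {
  get<T>(key: string): Promise<T | null>
  set<T>(key: string, value: T, ttlSeconds?: number): Promise<void>
  delete(key: string): Promise<void>
}

// A built-in in-memory provider: a Map with per-key expiry timestamps.
class InMemoryCacheProvider implements ICacheProvider {
  private store = new Map<string, { value: unknown; expiresAt: number | null }>()

  async get<T>(key: string): Promise<T | null> {
    const entry = this.store.get(key)
    if (!entry) return null
    if (entry.expiresAt !== null && entry.expiresAt < Date.now()) {
      // Entry expired: evict lazily on read.
      this.store.delete(key)
      return null
    }
    return entry.value as T
  }

  async set<T>(key: string, value: T, ttlSeconds?: number): Promise<void> {
    this.store.set(key, {
      value,
      expiresAt: ttlSeconds ? Date.now() + ttlSeconds * 1000 : null,
    })
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key)
  }
}
```

A Redis provider would implement the same interface on top of a Redis client, which is what lets call sites stay agnostic of the backing store.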
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* test(): test dynamic max workers
* Clarify test description and improve CI
* chore(): Upgrade mikro orm
* handle 'null' value for big number props
* 6.5.2
* remove only
* fix pricing module rule value
* switch select in strategy for balances
* revert to select in strategy for order module
* fix defining DML ManyToOne
* fix define relationship
* test fix
* more fixes
* change order strategy to balanced
* change order strategy to balanced
* prevent unnecessary manager fork
* revert generated www changes
* remove unnecessary changes
* Create real-cobras-deny.md
* address feedback
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
Glob 7 uses the `inflight` module, which leaks memory. Also, all other Medusa packages are already using glob 10+, so the version used by the framework has been upgraded as well.
Fixes: FRMW-2972
Fixes: FRMW-2960
This PR adds support for processing large CSV files by breaking them into chunks and processing one chunk at a time. Here is how it works in a nutshell (a rough sketch of the chunking step follows the list):
- The CSV file is read as a stream, and each chunk of the stream is one CSV row.
- We read up to 1,000 rows (plus a few more to ensure that the variants of a product are not split across multiple chunks).
- Each chunk is then normalized using the `CSVNormalizer` and validated using zod schemas. If there is an error, the entire process is aborted and the existing chunks are deleted.
- Each chunk is written to a JSON file so that we can process it later (after the user confirms) without re-parsing or re-validating the CSV file.
- The confirmation process will start consuming one chunk at a time and create/update products using the `batchProducts` workflow.
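Roughly, the chunking step can be sketched as below. This is a minimal illustration under assumptions, not the actual implementation: it uses a naive comma split instead of a real CSV parser, assumes a hypothetical `Product Handle` column for grouping a product's variants, and leaves out the `CSVNormalizer`/zod validation described above.

```ts
import { createReadStream } from "node:fs"
import { writeFile } from "node:fs/promises"
import { createInterface } from "node:readline"

type Row = Record<string, string>

const CHUNK_SIZE = 1000

// Reads the CSV line by line, buffers up to ~1,000 rows, and flushes each
// buffer to a JSON file. Flushing is deferred while consecutive rows belong
// to the same product, so a product's variants never straddle two chunks.
async function splitCsvIntoChunks(csvPath: string): Promise<string[]> {
  const rl = createInterface({ input: createReadStream(csvPath) })
  const chunkFiles: string[] = []
  let header: string[] | null = null
  let buffer: Row[] = []
  let chunkIndex = 0

  const flush = async () => {
    if (!buffer.length) return
    const file = `${csvPath}.chunk-${chunkIndex++}.json`
    await writeFile(file, JSON.stringify(buffer))
    chunkFiles.push(file)
    buffer = []
  }

  for await (const line of rl) {
    // Naive comma split for illustration; a real CSV parser handles quoting.
    const cells = line.split(",")
    if (!header) {
      header = cells
      continue
    }
    const row = Object.fromEntries(header.map((h, i) => [h, cells[i] ?? ""]))
    const prev = buffer[buffer.length - 1]
    // Flush only once the buffer is full AND the next row starts a new product.
    if (
      buffer.length >= CHUNK_SIZE &&
      prev &&
      prev["Product Handle"] !== row["Product Handle"]
    ) {
      await flush()
    }
    buffer.push(row)
  }
  await flush()
  return chunkFiles
}
```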
## Resume or not to resume processing of chunks
Let's imagine that, while processing chunks, we find that chunk 3 leads to a database error. By this point we have already processed the first two chunks. How do we deal with this situation? The options are:
- We store which chunk failed and then, during the re-upload, we skip the chunks before the failed one. In my conversation with @olivermrbl we discovered that resuming will have to work with certain assumptions if we decide to implement it:
  - What if a user updates CSV rows that are part of the already processed chunks? Those changes will be ignored, and they will never notice it.
  - Resuming only works if the file name stays the same. What if they made changes and saved the file with "Save as - New name"? In that case we will process the entire file anyway.
  - We will have to fetch the old workflow from the workflow engine using some `ilike` search, so that we can see at which chunk the last run failed for the given file.
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
**What**
- Bumps the versions of Vite across the entire stack, to prevent an issue similar to what is described here: https://github.com/vitejs/vite/discussions/18271
Not entirely sure what was happening, as I couldn't reproduce the issue, but Adrien faced it yesterday when working with local versions of our packages. It does appear as if the range we had before could allow a version of Vite with said bug to be installed.
* ../../core/types/src/dml/index.ts
* ../../core/types/src/dml/index.ts
* fix: relationships mapping
* handle nullable foreign keys types
* handle nullable foreign keys types
* handle nullable foreign keys types
* continue to update product category repository
* fix all product category repositories issues
* fix product category service types
* fix product module service types
* fix product module service types
* fix repository template type
* refactor: use a singleton DMLToMikroORM factory instance
Since the MikroORM MetadataStorage is global, the DML-to-MikroORM entity conversion has to use a global bucket as well.
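As a minimal sketch of that singleton idea (the class and method names below are invented for illustration and do not mirror the actual code):

```ts
// Illustrative sketch with invented names -- not the actual factory.
// Because MikroORM's MetadataStorage is global, converting the same DML
// entity twice would register duplicate metadata; a single shared factory
// memoizes conversions in one global bucket instead.
class DmlToMikroOrmFactory {
  private static instance: DmlToMikroOrmFactory | undefined
  private entities = new Map<string, unknown>()

  static getInstance(): DmlToMikroOrmFactory {
    if (!this.instance) {
      this.instance = new DmlToMikroOrmFactory()
    }
    return this.instance
  }

  // `convert` stands in for the real DML -> MikroORM conversion routine.
  toMikroOrmEntity(entityName: string, convert: () => unknown): unknown {
    if (!this.entities.has(entityName)) {
      this.entities.set(entityName, convert())
    }
    return this.entities.get(entityName)!
  }
}
```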
* refactor: update product module to use DML in tests
* wip: tests
* WIP product linkable fixes
* continue type fixing and start test fixing
* test: fix more tests
* fix repository
* fix pivot table computation + fix mikro orm repository
* fix many to many management and configuration
* fix many to many management and configuration
* fix many to many management and configuration
* update product tag relation configuration
* Introduce experimental dml hooks to fix some issues with categories
* more fixes
* fix product tests
* add missing id prefixes
* fix product category handle management
* test: fix more failing tests
* test: make it all green
* test: fix breaking tests
* fix: build issues
* fix: build issues
* fix: more breaking tests
* refactor: fix issues after merge
* refactor: fix issues after merge
* refactor: suppress type errors
* test: fix DML failing tests
* improve many to many inference + tests
* Wip fix columns from product entity
* remove product model before create hook and manage handle validation and transformation at the service level
* test: fix breaking unit tests
* fix: product module service to not update handle on product update
* fix define link and joiner config
* test: fix joiner config test
* test: fix joiner config test
* fix joiner config primary keys
* Fix joiner config builder
* Fix joiner config builder
* test: remove only modifier from test
* refactor: remove hooks usage from product collection
* refactor: remove hooks usage from product-option
* refactor: remove hooks usage for computing category handle
* refactor: remove hooks usage from productCategory model
* refactor: remove hooks from DML
* refactor: remove cruft
* order dml
* cleanup
* re-add foreign key indexes
* wip
* chore: remove unused types
* wip
* changes
* rm raw
* autoincrement
* wip
* rel
* refactor: cleanup
* migration and models configuration adjustments
* cleanup
* number searchable
* fix random ordering
* fix
* test: fix product-category tests
* test: update breaking DML tests
* test: array assertion to not care about ordering
* fix: temporarily apply id ordering for products
* types
* wip
* WIP type improvements
* update order models
* partially fix types temporarily
* rel
* fix: recursive type issue
* improve type inference breaks
* improve type inference breaks
* update models
* rm nullable
* default value
* repository
* update default value handling
* fix unit tests
* WIP
* toMikroORM
* fix relations
* cascades
* fix
* experimental dml hooks
* rm migration
* serial
* nullable autoincrement
* fix model
* model changes
* fix one to one DML
* order test
* fix addresses
* fix unit tests
* Re-align dml entity name inference
* update model table name config
* update model table name config
* revert
* update return relation
* WIP
* hasOne
* models
* fix
* model
* initial commit
* cart service
* order module
* utils unit test
* index engine
* changeset
* merge
* fix hasOne with fk
* update
* free text filter per entity
* tests
* prod category
* property string many to many
* fix big number
* link modules migration set names
* merge
* shipping option rules
* serializer
* unit test
* fix test mikro orm init
* fix test mikro orm init
* Maintain merge object properties
* fix test mikro orm init
* prevent unit test from connecting to db
* wip
* fix test
* fix test
* link test
* schema
* models
* auto increment
* hook
* model hooks
* order
* wip
* orm version
* request return field
* fix return configuration on order model
* workflows
* core flows
* unit test
* test
* base repo
* test
* base repo
* test fix
* inventory move
* locking inventory
* test
* free text fix
* rm timeout mock
* migrate fulfillment values
* v6.4.3
* cleanup
* link-modules update sql
* revert test
* remove fake timers
---------
Co-authored-by: adrien2p <adrien.deperetti@gmail.com>
Co-authored-by: Harminder Virk <virk.officials@gmail.com>
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>