* wip
* add wip
* wip
* reuse action
* finish first draft
* fix tests
* cleanup
* Only compute adjustments when necessary
* Create hot-carrots-look.md
* address comments
* minor tweaks
* fix pay col
* fix test
* wip
* wip
* wip
* fix: adjustment typo
* fix: import
* fix: workflow imports
* wip: update test
* feat: upsert versioned adjustments when previewing order
* fix: revert unique codes change
* fix: order spec test with versioning
* wip: save
* feat: make adjustments work for preview and confirm flow, wip base repo filtering of older version adjustments
* fix: missing populate where
* wip: populate where loading versioned adjustments
* fix: filter out older adjustment versions
* temp: comment adjustments in repo
* test: add adjustment if no version
* wip: configure populate where in order base repository
* fix: rm manual filtering
* fix: revert base repo changes
* fix: revert
* fix: use order item version instead of order version
* fix: rm only in test
* fix: update case spec
* fix: remove scenario, wip test with draft promotion
* feat: test correct adjustments when disabling promotion
* feat: complex test case
* feat: test consecutive order edits
* feat: 2 promotions test case with a fixed promo
* feat: migrate existing order line item adjustments to order items latest version
* feat: update dep after merge
* wip: load adjustments separately
* feat: adjustments collections
* fix: spread result, handle related entity case
* fix: update lock
* feat: make sure version is loaded, refactor, handle related entity case
* fix: check fields
* feat: loading adjustments for list and count
* fix: correct items version field
* fix: rm empty array
* fix: wip order modules spec
* fix: order module specs
* feat: preinit items adjustments
* fix: rm only
* fix: rm only
* chore: cleanup
* fix: migration files
* fix: don't change formatting
* fix: core package build
* chore: more cleanup
* fix: item update util
* fix: duplicate import
* fix: refresh adjustments for exchanges (#13992)
* wip: exchange adjustments
* feat: test - receive items
* feat: finish test case
* fix: casing
* fix(draft-orders, core-flows, orders): refresh adjustments for draft orders (#14025)
* wip: draft orders adjustments refresh
* feat: rewrite to use REPLACE action + test
* fix: rm only
* feat: cleanup old REPLACE actions
* feat: cleanup adjustments when 0 promotions
* wip: canceling draft order
* fix: make version arg optional
* fix: restore promotion links
* feat: test reverting on cancellation
* fix: address comments in tests
* wip: fix summary on preview
* fix: get pending diff on preview summary from total
* fix: revert pending diff change
---------
Co-authored-by: fPolic <mainacc.polic@gmail.com>
Co-authored-by: Frane Polić <16856471+fPolic@users.noreply.github.com>
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* Create lucky-poets-scream.md
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* chore(): Cleanup and organize deps
* dedupe snapshot this build
* split into 4 shards
* reconfigure packages integration tests
* reconfigure packages integration tests
* reconfigure packages integration tests
* reconfigure packages integration tests
* update scripts
* update scripts
* update scripts
* update scripts
* update scripts
* update scripts
* update scripts
* update scripts
* reduce shard for packages
**What**
After a lot of investigation, we finally found one of our performance regression points (see [here](https://github.com/mikro-orm/mikro-orm/issues/6905)). This PR downgrades MikroORM and moves the loading strategy back to select-in where needed.
RESOLVES CORE-1153
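A minimal sketch, assuming a plain MikroORM v6 setup (entity paths and database name are placeholders, not Medusa's actual configuration), of pinning the loading strategy back to select-in:

```ts
import { LoadStrategy, MikroORM } from "@mikro-orm/postgresql"

// Sketch only — paths and dbName are placeholders.
const orm = await MikroORM.init({
  entities: ["./dist/models"],
  dbName: "medusa",
  // The joined strategy triggered the regression tracked in
  // mikro-orm/mikro-orm#6905, so relations are loaded with separate
  // `SELECT ... WHERE id IN (...)` queries instead.
  loadStrategy: LoadStrategy.SELECT_IN,
})
```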
**What**
- This PR mainly lays the foundation for the caching layer. It comes with a module (a built-in in-memory cache) and a Redis provider.
- Applies caching to a few touch points to test it.
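A hypothetical sketch of the shape such a layer could take — one provider interface with an in-memory default, and a Redis provider implementing the same contract (names are illustrative, not the module's actual API):

```ts
// Hypothetical provider contract — not the module's actual API.
interface ICacheProvider {
  get<T>(key: string): Promise<T | null>
  set<T>(key: string, value: T, ttlSeconds?: number): Promise<void>
  invalidate(key: string): Promise<void>
}

// Built-in in-memory default; a Redis provider would implement the
// same interface on top of a Redis client.
class InMemoryCacheProvider implements ICacheProvider {
  private store = new Map<string, { value: unknown; expiresAt: number }>()

  async get<T>(key: string): Promise<T | null> {
    const hit = this.store.get(key)
    if (!hit || hit.expiresAt < Date.now()) {
      this.store.delete(key)
      return null
    }
    return hit.value as T
  }

  async set<T>(key: string, value: T, ttlSeconds = 60): Promise<void> {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 })
  }

  async invalidate(key: string): Promise<void> {
    this.store.delete(key)
  }
}
```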
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
**What**
The context reference is being mutated by the repository, leading to an empty context. Also, the filter is built from the pricing context instead of pricing context -> context, which means all preferences are fetched every time.
PARTIALLY RESOLVES CORE-1156
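A hedged illustration of both fixes, with all names made up for the example:

```ts
// Illustrative only — not the actual pricing module code.
type PricingContext = { context: Record<string, unknown> }

function buildPreferenceFilters(ctx: Record<string, unknown>) {
  // Derive filters from the inner context (previously the outer wrapper
  // was used, which matched nothing and fetched all preferences).
  return { attribute: Object.keys(ctx) }
}

function listPricePreferences(pricingContext: PricingContext) {
  // Copy before passing down: the repository mutates what it receives,
  // which previously emptied the caller's context reference.
  const repoContext = { ...pricingContext.context }
  return buildPreferenceFilters(repoContext)
}
```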
**What**
Improve `upsertWithReplace` to batch as much as possible of what can be batched. The performance of this method will be much greater, especially for cases with many entities to batch (e.g. we have seen many cases where users bulk-import products with hundreds of variants, options, etc.).
For example, take the following object:
- entity 1
  - entity 2 []
    - entity 3 []
  - entity 2 []
    - entity 3 []
Here, all `entity 3` records will be batched together and all `entity 2` records will be batched together.
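A small sketch of the batching idea (illustrative, not the actual `upsertWithReplace` implementation): entities of the same type are collected across the whole payload and persisted once per type, instead of once per parent:

```ts
type EntityNode = { type: string; children?: EntityNode[] }

function collectBatches(
  root: EntityNode,
  batches: Map<string, EntityNode[]> = new Map()
): Map<string, EntityNode[]> {
  for (const child of root.children ?? []) {
    const batch = batches.get(child.type) ?? []
    batch.push(child)
    batches.set(child.type, batch)
    collectBatches(child, batches)
  }
  // One upsert per entity type: every "entity 2" ends up in one batch,
  // every "entity 3" in another, regardless of which parent owns them.
  return batches
}
```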
I've also added a pretty detailed test that checks every stage and what is or isn't batched, with many comments, so that it is easier to consume and remember in the future.
Also includes:
- MikroORM upgrade (issues found and fixed)
- order module hooks fixes
**NOTE**
It was easier for now to do this instead of rewriting the different areas where it is being used. It may also mean that we get performance closer to what we would expect to have natively.
**NOTE 2**
Also fixes the fact that the integration tests of the core packages never ran 😂
* chore(): Upgrade mikro orm
* handle 'null' value for big number props
* 6.5.2
* remove only
* fix pricing module rule value
* switch select in strategy for balances
* revert to select in strategy for order module
* fix defining DML ManyToOne
* fix define relationship
* test fix
* more fixes
* change order strategy to balanced
* change order strategy to balanced
* prevent unnecessary manager fork
* revert generated www changes
* remove unnecessary changes
* Create real-cobras-deny.md
* address feedback
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* feat: Add draft order plugin
* version draft order plugin
* update readme
* chore: Update scripts
* Create purple-dolls-cheer.md
* port over latest changes
* chore: Make package public
* feat: add view_configurations feature flag
- Add feature flag provider and hooks to admin dashboard
- Add backend API endpoint for feature flags
- Create view_configurations feature flag (disabled by default)
- Update order list table to use legacy version when flag is disabled
- Can be enabled with MEDUSA_FF_VIEW_CONFIGURATIONS=true env var
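A hypothetical sketch of the flag definition following the convention described above (the descriptor shape is an assumption, not the actual implementation):

```ts
// Hypothetical flag descriptor — disabled by default, overridable via env var.
export const ViewConfigurationsFeatureFlag = {
  key: "view_configurations",
  default_val: false,
  env_key: "MEDUSA_FF_VIEW_CONFIGURATIONS",
  description: "Enable configurable list views in the admin dashboard",
}
```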
* fix: naming
* fix: feature flags unauthenticated
* fix: add test
* feat: add settings module
* fix: deps
* fix: cleanup
* fix: add more tests
* fix: rm changelog
* fix: deps
* fix: add settings module to default modules list
* fix: pr comments
* fix: deps,build
* fix: alias
* fix: tests
* fix: update snapshots
**What**
Fixed a bug in the `prepareListQuery` function where nested field ordering was not properly building the expected nested object structure. The function was returning flat objects like `{ "employee.first_name": "ASC" }` instead of the correct nested structure `{ "employee": { "first_name": "ASC" } }`.
**Why**
The buildOrder function is designed to create nested objects from dot-notation field paths, which is essential for proper query building in the Medusa framework. When this functionality was broken, it prevented correct ordering of related fields and caused queries to fail or return unexpected results.
**How**
- Root cause: The `prepareListQuery` function was not properly utilizing the `buildOrder` utility function to transform dot-notation field paths into nested objects
- Before: `order = "employee.first_name"` → `{ "employee.first_name": "ASC" }`
- After: `order = "employee.first_name"` → `{ "employee": { "first_name": "ASC" } }`
- Added comprehensive tests: Created detailed unit tests for the prepareListQuery function focusing on buildOrder functionality, covering various scenarios including:
- Simple ascending/descending order
- Nested field ordering (e.g., product.title)
- Deeply nested ordering (e.g., product.variants.prices.amount)
- Multiple nesting levels (up to 5 levels deep)
- Added integration tests: Created integration tests in `product.spec.ts` to verify the full end-to-end functionality of nested ordering with variant titles
The fix ensures that the buildOrder function properly transforms dot-notation field paths into the expected nested object structure, enabling correct query building for related field ordering throughout the Medusa framework.
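A minimal standalone sketch of the transformation the fix restores (`buildOrder` is the real utility name; this implementation is illustrative):

```ts
function buildOrder(order: Record<string, string>): Record<string, any> {
  const result: Record<string, any> = {}
  for (const [path, direction] of Object.entries(order)) {
    const keys = path.split(".")
    let node = result
    // Walk/create an intermediate object per dot-separated segment.
    for (const key of keys.slice(0, -1)) {
      node = node[key] ??= {}
    }
    node[keys[keys.length - 1]] = direction
  }
  return result
}

// buildOrder({ "employee.first_name": "ASC" })
// => { employee: { first_name: "ASC" } }
// buildOrder({ "product.variants.prices.amount": "DESC" })
// => { product: { variants: { prices: { amount: "DESC" } } } }
```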
Resolves SUP-1868
Glob 7 uses the `inflight` module, which leaks memory. Also, all other Medusa packages are using glob 10+, so the one used by the framework has been upgraded too.
Fixes: FRMW-2972
Fixes: FRMW-2960
This PR adds support for processing large CSV files by breaking them into chunks and processing one chunk at a time. This is how it works in a nutshell (a rough sketch follows the list):
- The CSV file is read as a stream and each chunk of the stream is one CSV row.
- We read up to 1000 rows (plus a few more to ensure the product variants of a product are not split into multiple chunks).
- Each chunk is then normalized using the `CSVNormalizer` and validated using zod schemas. If there is an error, the entire process will be aborted and the existing chunks will be deleted.
- Each chunk is written to a JSON file, so that we can process them later (after user confirms) without re-processing or validating the CSV file.
- The confirmation process will start consuming one chunk at a time and create/update products using the `batchProducts` workflow.
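A hedged sketch of the chunking loop described above (the function name and CSV layout are assumptions, not the actual implementation):

```ts
import { createReadStream } from "node:fs"
import { writeFile } from "node:fs/promises"
import { createInterface } from "node:readline"

// Assumption: the product id is the first CSV column; the real import
// runs each chunk through the CSVNormalizer and zod validation before
// the chunk is written.
async function splitCsvIntoChunks(filePath: string, chunkSize = 1000) {
  const lines = createInterface({ input: createReadStream(filePath) })
  let rows: string[] = []
  let lastProductId: string | undefined
  let chunkIndex = 0

  for await (const line of lines) {
    const productId = line.split(",")[0]
    // Flush only once we are past the limit AND at a product boundary,
    // so a product's variants never straddle two chunks.
    if (rows.length >= chunkSize && productId !== lastProductId) {
      await writeFile(`chunk-${chunkIndex++}.json`, JSON.stringify(rows))
      rows = []
    }
    rows.push(line)
    lastProductId = productId
  }

  if (rows.length) {
    await writeFile(`chunk-${chunkIndex}.json`, JSON.stringify(rows))
  }
}
```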
## Resume or not to resume processing of chunks
Let's imagine that during the processing of chunks, we find that chunk 3 leads to a database error. However, by this time we have already processed the first two chunks. How do we deal with this situation? Options are:
- We store which chunk we failed at, and then during the re-upload we ignore chunks before the failed one. In my conversation with @olivermrbl, we discovered that resuming will have to work under certain assumptions if we decide to implement it.
- What if a user updates the CSV rows which are part of the already processed chunks? These changes will be ignored and they will never notice it.
- Resuming works only if the file name is still the same. What if they made changes and saved the file under a new name ("Save as")? In that case we will process the entire file anyway.
- We will have to fetch the old workflow from the workflow engine using some `ilike` search, so that we can see at which chunk the last run failed for the given file.
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>