* chore: Upgrade MikroORM
* handle 'null' value for big number props
* 6.5.2
* remove only
* fix pricing module rule value
* switch select in strategy for balances
* revert to select in strategy for order module
* fix defining DML ManyToOne
* fix define relationship
* test fix
* more fixes
* change order strategy to balanced
* change order strategy to balanced
* prevent unnecessary manager fork
* revert generated www changes
* remove unnecessary changes
* Create real-cobras-deny.md
* address feedback
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* feat: Add draft order plugin
* version draft order plugin
* update readme
* chore: Update scripts
* Create purple-dolls-cheer.md
* port over latest changes
* chore: Make package public
* feat: add view_configurations feature flag
- Add feature flag provider and hooks to admin dashboard
- Add backend API endpoint for feature flags
- Create view_configurations feature flag (disabled by default)
- Update order list table to use legacy version when flag is disabled
- Can be enabled with the `MEDUSA_FF_VIEW_CONFIGURATIONS=true` env var (see the sketch below)
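For illustration, resolving the flag from the environment could look like this; `isViewConfigurationsEnabled` is a hypothetical helper name, not necessarily the code added in this PR:

```ts
// Minimal sketch: resolve the flag from the environment. The env var name
// comes from the PR description; the helper name is hypothetical.
export function isViewConfigurationsEnabled(): boolean {
  // Disabled by default; only the exact value "true" enables it.
  return process.env.MEDUSA_FF_VIEW_CONFIGURATIONS === "true"
}
```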
* fix: naming
* fix: feature flags unauthenticated
* fix: add test
* feat: add settings module
* fix: deps
* fix: cleanup
* fix: add more tests
* fix: rm changelog
* fix: deps
* fix: add settings module to default modules list
* fix: pr comments
* fix: deps, build
* fix: alias
* fix: tests
* fix: update snapshots
**What**
Fixed a bug in the `prepareListQuery` function where nested field ordering was not building the expected nested object structure. The function was returning flat objects like `{ "employee.first_name": "ASC" }` instead of the correct nested structure `{ "employee": { "first_name": "ASC" } }`.
**Why**
The `buildOrder` function is designed to create nested objects from dot-notation field paths, which is essential for proper query building in the Medusa framework. When this functionality was broken, it prevented correct ordering of related fields and caused queries to fail or return unexpected results.
**How**
- Root cause: The `prepareListQuery` function was not properly utilizing the `buildOrder` utility function to transform dot-notation field paths into nested objects
- Before: `order = "employee.first_name"` → `{ "employee.first_name": "ASC" }`
- After: `order = "employee.first_name"` → `{ "employee": { "first_name": "ASC" } }`
- Added comprehensive tests: created detailed unit tests for the `prepareListQuery` function focusing on `buildOrder` functionality, covering various scenarios including:
  - Simple ascending/descending order
  - Nested field ordering (e.g. `product.title`)
  - Deeply nested ordering (e.g. `product.variants.prices.amount`)
  - Multiple nesting levels (up to 5 levels deep)
- Added integration tests: created integration tests in `product.spec.ts` to verify the full end-to-end functionality of nested ordering with variant titles
The fix ensures that the `buildOrder` function properly transforms dot-notation field paths into the expected nested object structure, enabling correct query building for related field ordering throughout the Medusa framework.
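For illustration, a minimal stand-in for the transformation (simplified, not the framework's actual `buildOrder` implementation) could look like this:

```ts
type Order = { [key: string]: "ASC" | "DESC" | Order }

// Simplified stand-in for the framework utility: expand dot-notation
// keys into nested objects, keeping the sort direction at the leaf.
function buildOrder(order: Record<string, "ASC" | "DESC">): Order {
  const result: Order = {}
  for (const [path, direction] of Object.entries(order)) {
    const segments = path.split(".")
    let node = result
    // Create/descend into intermediate objects for all but the last segment.
    for (const segment of segments.slice(0, -1)) {
      node = (node[segment] ??= {}) as Order
    }
    node[segments[segments.length - 1]] = direction
  }
  return result
}

// buildOrder({ "employee.first_name": "ASC" })
// => { employee: { first_name: "ASC" } }
```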
Resolves SUP-1868
Glob 7 uses the `inflight` module, which leaks memory. Also, all other Medusa packages are already on glob 10+, so the one used by the framework has been upgraded as well.
Fixes: FRMW-2972
Fixes: FRMW-2960
This PR adds support for processing large CSV files by breaking them into chunks and processing one chunk at a time. In a nutshell, this is how it works:
- The CSV file is read as a stream and each chunk of the stream is one CSV row.
- We read up to 1000 rows per chunk (plus a few more to ensure the variants of a product are not split across multiple chunks).
- Each chunk is then normalized using the `CSVNormalizer` and validated using Zod schemas. If there is an error, the entire process is aborted and the chunks written so far are deleted.
- Each chunk is written to a JSON file, so that we can process the chunks later (after the user confirms) without re-reading or re-validating the CSV file.
- The confirmation process consumes one chunk at a time and creates/updates products using the `batchProducts` workflow. (The chunking loop is sketched after this list.)
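To make the chunking step concrete, here is a minimal sketch under a few assumptions: `csv-parse` as the streaming parser, a `"Product Handle"` column marking product boundaries, and a hypothetical `writeChunk` helper; the actual implementation may differ.

```ts
import { createReadStream } from "node:fs"
import { writeFile } from "node:fs/promises"
import { parse } from "csv-parse"

const CHUNK_SIZE = 1000

// Hypothetical helper: persist a chunk as JSON so the confirmation step
// can consume it later without re-reading the CSV.
async function writeChunk(index: number, rows: unknown[]): Promise<void> {
  await writeFile(`chunk-${index}.json`, JSON.stringify(rows))
}

// Stream the CSV and buffer ~1000 rows per chunk, cutting a chunk only on
// a product boundary so one product's variant rows never span two chunks.
async function splitIntoChunks(filePath: string): Promise<void> {
  let rows: Record<string, string>[] = []
  let chunkIndex = 0

  const stream = createReadStream(filePath).pipe(parse({ columns: true }))
  for await (const row of stream) {
    const startsNewProduct =
      rows.length > 0 &&
      row["Product Handle"] !== rows[rows.length - 1]["Product Handle"]
    if (rows.length >= CHUNK_SIZE && startsNewProduct) {
      await writeChunk(chunkIndex++, rows)
      rows = []
    }
    rows.push(row)
  }
  if (rows.length > 0) {
    await writeChunk(chunkIndex, rows)
  }
}
```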
## To resume or not to resume processing of chunks
Let's imagine that while processing chunks, chunk 3 leads to a database error. By this point, we have already processed the first two chunks. How do we deal with this situation? The options are:
- We store the chunk at which we failed, and during the re-upload we skip the chunks before the failed one. In my conversation with @olivermrbl, we discovered that resuming would have to work with certain assumptions if we decide to implement it:
  - What if a user updates CSV rows that are part of the already processed chunks? Those changes will be ignored, and the user will never notice it.
  - Resuming only works if the file name is still the same. What if they made changes and saved the file with "Save as" under a new name? In that case, we will process the entire file anyway.
  - We would have to fetch the old workflow from the workflow engine using some `ilike` search, so that we can see at which chunk the last run failed for the given file.
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Since the runtime of `@medusajs/analytics-posthog` relies on the `posthog-node` package, it should be installed either as a dependency or as a peerDependency that is satisfied by the user's project. In this PR, I have added it as a peer dependency.
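For reference, the relevant `package.json` change looks roughly like this (the version range is illustrative):

```json
{
  "name": "@medusajs/analytics-posthog",
  "peerDependencies": {
    "posthog-node": "^4.0.0"
  }
}
```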
* feat: Add an analytics module and local and posthog providers
* fix: Add tests and wire up in missing places
* fix: Address feedback and add missing module typing
* fix: Address feedback and add missing module typing
---------
Co-authored-by: Adrien de Peretti <adrien.deperetti@gmail.com>
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
Fixes: FRMW-2965
In this PR, we replace the existing step that normalizes a CSV file with the newly written CSV normalizer, and we additionally validate the file contents using a Zod schema.
I have duplicated the schema for now. But if it makes sense to re-use the schema for CSV validation and `/admin/products/batch`, then I can keep one source of truth under utils and re-export it. WDYT?
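As a rough illustration of the validation step (the schema below is a trimmed stand-in, not the actual one):

```ts
import { z } from "zod"

// Trimmed stand-in for the product-row schema; the real schema mirrors the
// `/admin/products/batch` payload and has many more fields.
const productRowSchema = z.object({
  title: z.string().min(1),
  handle: z.string().optional(),
  status: z.enum(["draft", "published"]).optional(),
})

// Validate every normalized row and abort the whole import on the first
// violation, surfacing the row number in the error message.
function validateRows(rows: unknown[]): void {
  rows.forEach((row, index) => {
    const result = productRowSchema.safeParse(row)
    if (!result.success) {
      throw new Error(`Row ${index + 1}: ${result.error.issues[0].message}`)
    }
  })
}
```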
**Screenshots of some errors after validating the file strictly**
**What**
- Reworks how admin extensions are loaded from plugins.
- Reworks how extensions are managed internally in the dashboard project.
**Why**
- Previously, we loaded extensions from plugins the same way we do for extensions found in a user's application: scanning the source code for possible extensions in `.medusa/server/src/admin`, and including any extensions that were discovered in the final virtual modules.
- This was causing issues with how Vite optimizes dependencies and would lead to CJS/ESM issues. We are not sure of the exact cause, but the issue was pinpointed to Vite not correctly registering which dependencies to optimize when they were loaded through the virtual module from a plugin in `node_modules`.
**What changed**
- To circumvent the above issue, we have changed to a different strategy for loading extensions from plugins. The changes are the following:
- We now build plugins slightly differently: if a plugin has admin extensions, we build those to `.medusa/server/src/admin/index.mjs` and `.medusa/server/src/admin/index.js` for ESM and CJS builds respectively.
- When determining how to load extensions from a source, we follow these rules (sketched after this list):
  - If the source has a `medusa-plugin-options.json` or is the root application, we determine that it is a `local` extension source and load its extensions as previously, through a virtual module.
  - If it has neither of the above but has a `./admin` export in its package.json, we determine that it is a `package` extension, and we update the entry point for the dashboard to import the package and pass its extensions along to the dashboard manager.
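In pseudo-code, the resolution logic boils down to something like this (function and field names are illustrative, not the dashboard's actual internals):

```ts
import { existsSync } from "node:fs"
import { join } from "node:path"

type ExtensionSourceType = "local" | "package" | "none"

// Illustrative sketch of the rules described above.
function resolveExtensionSource(
  sourcePath: string,
  packageJson: { exports?: Record<string, unknown> },
  isRootApplication: boolean
): ExtensionSourceType {
  // The app itself, or a plugin under local development, is scanned as
  // before and loaded through a virtual module.
  if (
    isRootApplication ||
    existsSync(join(sourcePath, "medusa-plugin-options.json"))
  ) {
    return "local"
  }
  // A built plugin exposes its admin bundle via a "./admin" export, which
  // the dashboard entry point imports directly.
  if (packageJson.exports?.["./admin"]) {
    return "package"
  }
  return "none"
}
```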
**Changes required by plugin authors**
- The change introduces no breaking changes, but it requires plugin authors to update the `package.json` of their plugins to also include a `./admin` export. It should look like this:
```json
{
  "name": "@medusajs/plugin",
  "version": "0.0.1",
  "description": "A starter for Medusa plugins.",
  "author": "Medusa (https://medusajs.com)",
  "license": "MIT",
  "files": [
    ".medusa/server"
  ],
  "exports": {
    "./package.json": "./package.json",
    "./workflows": "./.medusa/server/src/workflows/index.js",
    "./.medusa/server/src/modules/*": "./.medusa/server/src/modules/*/index.js",
    "./modules/*": "./.medusa/server/src/modules/*/index.js",
    "./providers/*": "./.medusa/server/src/providers/*/index.js",
    "./*": "./.medusa/server/src/*.js",
    "./admin": {
      "import": "./.medusa/server/src/admin/index.mjs",
      "require": "./.medusa/server/src/admin/index.js",
      "default": "./.medusa/server/src/admin/index.js"
    }
  }
}
```