- fix: make Metadata Variant and Product Variant static columns use the new function, so they don't fail later with the create / update product zod schema
- chore: add tests for processAsJson function
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
**What**
- Allow auto-loaded Medusa files to export a config object.
- Currently supports `isDisabled` to control loading.
- A new `FeatureFlag` instance is exported by `@medusajs/framework/utils`.
- `feature-flags` is now a supported folder for Medusa projects, modules, providers, and plugins. The flags it contains are loaded and registered on `FeatureFlag`.
**Why**
- Enables conditional loading of routes, migrations, jobs, subscribers, workflows, and other files based on feature flags.
```ts
// /src/feature-flags
import { FlagSettings } from "@medusajs/framework/feature-flags"

const CustomFeatureFlag: FlagSettings = {
  key: "custom_feature",
  default_val: false,
  env_key: "FF_MY_CUSTOM_FEATURE",
  description: "Enable xyz",
}

export default CustomFeatureFlag
```
```ts
// /src/modules/my-custom-module/migration/Migration20250822135845.ts
import { Migration } from "@mikro-orm/migrations"
// NOTE: the import path of `defineFileConfig` is assumed here
import { FeatureFlag, defineFileConfig } from "@medusajs/framework/utils"

export class Migration20250822135845 extends Migration {
  override async up() {}
  override async down() {}
}

// Skip loading this migration unless the custom feature flag is enabled
defineFileConfig({
  isDisabled: () => !FeatureFlag.isFeatureEnabled("custom_feature"),
})
```
* fix(utils): fix promotion case of each allocation not applying its amount
* chore: fixed tests
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* feat: add view_configurations feature flag
- Add feature flag provider and hooks to admin dashboard
- Add backend API endpoint for feature flags
- Create view_configurations feature flag (disabled by default)
- Update order list table to use legacy version when flag is disabled
- Can be enabled with the `MEDUSA_FF_VIEW_CONFIGURATIONS=true` env var
* fix: naming
* fix: feature flags unauthenticated
* fix: add test
* feat: add settings module
* fix: deps
* fix: cleanup
* fix: add more tests
* fix: rm changelog
* fix: deps
* fix: add settings module to default modules list
* fix: pr comments
* fix: deps,build
* fix: alias
* fix: tests
* fix: update snapshots
* chore(types, api): support shipping option type api endpoints
* core flows
* api
* typos
* compiler errors
* integration tests
* remove metadata
* changeset
* modify test
* upsert
* change remote query
* minor to patch
* description optional
* description optional
* woops my bad
* my bad again
---------
Co-authored-by: william bouchard <williambouchard@williams-MacBook-Pro.local>
* Move from `total` to `original_total` to resolve an edge case in adjustment calculation
* Added changeset
* Added test case for correction
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* This fixes the discount_ calculation logic
* This makes adjustments be handled as subtotal values in every calculation and applies the tax-inclusive logic to the promotion value itself
* Added some test cases and reverted some changes to improve testing output
* Fixed a test case based on feedback
* Corrected promotion/admin test cases
* Corrected cart/store test case
* Improved cart/store test cases for more robust promotion testing considering tax inclusion flags
* Remove unnecessary changes, as adjustments are now automatically subtotals and therefore the tax-inclusive flag does not need to be applied again
* Remove adjustments->is_tax_inclusive usage everywhere
* Migration script to remove `is_tax_inclusive` in cart line item adjustment
* Forgot to adjust one more test case
* Corrections based on fPolic feedback
* Refactored PR to consider feedback from oliver
* Added more testcases for promotion in cart
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
**What**
Fixed CSV import functionality to properly handle columns that were previously causing import errors.
**Why**
Users were encountering "Invalid column name(s)" errors when importing CSV files containing system-generated columns like "Product Created At", "Product Updated At", etc. These columns are automatically added by export templates but should be ignored during import since they're not part of the product creation schema.
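A minimal sketch of the idea (illustrative only; the column names are taken from the examples above, and the real logic lives in the CSV import normalizer): strip system-generated export columns before validating a row against the product schema.
```ts
// Hypothetical helper; the actual implementation lives in the import normalizer.
const SYSTEM_COLUMNS = new Set(["Product Created At", "Product Updated At"])

function stripSystemColumns(row: Record<string, string>): Record<string, string> {
  return Object.fromEntries(
    Object.entries(row).filter(([column]) => !SYSTEM_COLUMNS.has(column))
  )
}
```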
Resolves FRMW-2983
**What**
Currently, filtering data with a `deleted_at` value automatically applies the `withDeleted` flag, which in turn removes the default constraint applied to all queries (`deleted_at: null`). The problem is that this does not account for the value assigned to `deleted_at`, leading to inconsistent behaviour depending on the value. e.g. filtering with `deleted_at: { $eq: null }`, where the expectation is to only return non-deleted records, ends up returning deleted records as well because the `withDeleted` flag is applied.
This PR reverts this auto-detection in favor of the user providing `withDeleted` explicitly (as it is already supported) alongside the filters.
Furthermore, some integration tests demonstrate how to filter deleted records (e.g. products) from the API. While the API did not properly support it, this PR adds support for passing a `with_deleted` flag to the query, handled according to what our API supports. Validators have been updated and the product list endpoint benefits from it. Also, the list config type already accepted such a value, which I have translated to the remote query config.
Also, since the previous PR adjusted the product types, I've adjusted them to match the expectation.
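A minimal sketch of the intended usage after this change, assuming a resolved product module service and the `withDeleted` option already supported by the list config:
```ts
import type { IProductModuleService } from "@medusajs/framework/types"

async function listProductsIncludingDeleted(productService: IProductModuleService) {
  // Default behaviour: the implicit `deleted_at: null` constraint still applies.
  const active = await productService.listProducts({})

  // Opt in explicitly to include soft-deleted records, instead of relying on a
  // `deleted_at` filter to flip the behaviour.
  const all = await productService.listProducts({}, { withDeleted: true })

  return { active, all }
}
```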
**What**
Fixed a bug in the `prepareListQuery` function where nested field ordering was not properly building the expected nested object structure. The function was returning flat objects like `{ "employee.first_name": "ASC" }` instead of the correct nested structure `{ "employee": { "first_name": "ASC" } }`.
**Why**
The `buildOrder` function is designed to create nested objects from dot-notation field paths, which is essential for proper query building in the Medusa framework. When this functionality was broken, it prevented correct ordering of related fields and caused queries to fail or return unexpected results.
**How**
- Root cause: The `prepareListQuery` function was not properly utilizing the `buildOrder` utility function to transform dot-notation field paths into nested objects
- Before: `order = "employee.first_name"` → `{ "employee.first_name": "ASC" }`
- After: `order = "employee.first_name"` → `{ "employee": { "first_name": "ASC" } }`
- Added comprehensive tests: Created detailed unit tests for the prepareListQuery function focusing on buildOrder functionality, covering various scenarios including:
  - Simple ascending/descending order
  - Nested field ordering (e.g., `product.title`)
  - Deeply nested ordering (e.g., `product.variants.prices.amount`)
  - Multiple nesting levels (up to 5 levels deep)
- Added integration tests: Created integration tests in `product.spec.ts` to verify the full end-to-end functionality of nested ordering with variant titles
The fix ensures that the `buildOrder` function properly transforms dot-notation field paths into the expected nested object structure, enabling correct query building for related field ordering throughout the Medusa framework.
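As a minimal sketch of the transformation (not the framework's actual implementation), the nested structure can be built by folding the dot-notation path from right to left:
```ts
type NestedOrder = { [key: string]: "ASC" | "DESC" | NestedOrder }

// Fold the dot-notation path from right to left into nested objects.
function buildNestedOrder(path: string, direction: "ASC" | "DESC"): NestedOrder {
  return path
    .split(".")
    .reverse()
    .reduce<NestedOrder | "ASC" | "DESC">(
      (acc, key) => ({ [key]: acc }),
      direction
    ) as NestedOrder
}

// buildNestedOrder("employee.first_name", "ASC")
// => { employee: { first_name: "ASC" } }
```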
Resolves SUP-1868
RESOLVES FRMW-2978
**What**
Add a retry mechanism to database connection management to prevent failures when the server starts before the database connection becomes available.
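A generic sketch of the retry idea (the actual implementation in the connection loader may differ): retry the connect call a few times with a short delay so that a server starting before the database is reachable does not fail immediately.
```ts
async function connectWithRetry<T>(
  connect: () => Promise<T>,
  retries = 5,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect()
    } catch (error) {
      lastError = error
      // Wait before the next attempt; a real implementation might use backoff.
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
  throw lastError
}
```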
FIXES SUP-1824
**What**
The Medusa internal service update should always return the data in the expected shape described by the interface. The Medusa service should not have to handle the reshaping.
Fixes: FRMW-2960
This PR adds support for processing large CSV files by breaking them into chunks and processing one chunk at a time. In a nutshell, this is how it works (a rough sketch of the chunking rule follows the list).
- The CSV file is read as a stream and each chunk of the stream is one CSV row.
- We read up to 1,000 rows (plus a few more to ensure the variants of a product are not split across multiple chunks).
- Each chunk is then normalized using the `CSVNormalizer` and validated using zod schemas. If there is an error, the entire process will be aborted and the existing chunks will be deleted.
- Each chunk is written to a JSON file, so that we can process them later (after user confirms) without re-processing or validating the CSV file.
- The confirmation process will start consuming one chunk at a time and create/update products using the `batchProducts` workflow.
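A rough sketch of the chunking rule described above (illustrative only; it assumes the parsed CSV is exposed as an async iterable of rows and that rows of the same product share a "Product Handle" value):
```ts
async function* chunkRows(
  rows: AsyncIterable<Record<string, string>>,
  size = 1000
): AsyncGenerator<Record<string, string>[]> {
  let chunk: Record<string, string>[] = []
  let lastHandle: string | undefined

  for await (const row of rows) {
    const handle = row["Product Handle"]
    // Only flush once we are past the size limit AND the current row starts a
    // new product, so the variants of one product never span two chunks.
    if (chunk.length >= size && handle !== lastHandle) {
      yield chunk
      chunk = []
    }
    chunk.push(row)
    lastHandle = handle
  }

  if (chunk.length) {
    yield chunk
  }
}
```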
## Resume or not to resume processing of chunks
Let's imagine that during processing of chunks, we find that chunk 3 leads to a database error. By this time, we have already processed the first two chunks. How do we deal with this situation? Options are:
- We store at which chunk we failed and then during the re-upload we ignore chunks before the failed one. In my conversation with @olivermrbl we discovered that resuming will have to work with certain assumptions if we decide to implement it.
  - What if a user updates the CSV rows that are part of the already processed chunks? These changes will be ignored and they will never notice it.
  - Resuming only works if the file name is still the same. What if they made changes and saved the file with "Save as" under a new name? In that case we will process the entire file anyway.
  - We will have to fetch the old workflow from the workflow engine using some `ilike` search, so that we can see at which chunk the last run failed for the given file.
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Fixes: FRMW-2974
Currently, during product imports, we create multiple chunks that must be deleted after the import has finished (either successfully or with an error). Deleting files one by one leads to multiple network calls and slows everything down.
The `bulkDelete` method deletes multiple files (by their `fileKey`) in one go.
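A minimal sketch of the difference (the interface here is simplified for illustration; the real provider contract may differ):
```ts
interface FileProviderWithBulkDelete {
  delete(file: { fileKey: string }): Promise<void>
  bulkDelete(files: { fileKey: string }[]): Promise<void>
}

async function cleanupChunks(
  provider: FileProviderWithBulkDelete,
  fileKeys: string[]
) {
  // Before: one network call per chunk file
  // for (const fileKey of fileKeys) await provider.delete({ fileKey })

  // After: a single call deletes every chunk at once
  await provider.bulkDelete(fileKeys.map((fileKey) => ({ fileKey })))
}
```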