* fix: handle missing variants and preserve zero unit_price in prepareLineItems
Fixed two issues in prepareLineItems:
1. A unit_price value of 0 was previously treated as falsy and overwritten with the variant's price. Updated the check to only fall back when unitPrice is null or undefined (see the sketch after this list).
2. When no variants array was provided, the code could throw due to non-null assertions. Now safely handles missing or empty variants.
Additional adjustments:
- Replaced `||` with `??` for array defaults to preserve valid empty arrays.
- Kept changes minimal to avoid altering the function signature.
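A minimal sketch of the resulting fallback logic, assuming hypothetical types and names (illustrative only, not the actual `prepareLineItems` implementation):

```ts
interface VariantLike {
  id: string
  calculated_price?: { calculated_amount: number }
}

interface ItemInput {
  variant_id?: string
  unit_price?: number | null
}

function resolveUnitPrice(item: ItemInput, variants?: VariantLike[]): number {
  // `?? []` preserves a valid empty array and only substitutes the default
  // when `variants` is null/undefined, unlike `|| []`.
  const variantMap = new Map((variants ?? []).map((v) => [v.id, v]))
  const variant = item.variant_id ? variantMap.get(item.variant_id) : undefined

  // Fall back to the variant price only when unit_price is null/undefined,
  // so an explicit 0 is preserved instead of being treated as falsy.
  return item.unit_price ?? variant?.calculated_price?.calculated_amount ?? 0
}
```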
* Create sweet-wasps-build.md
* Update add-line-items.ts
* Update packages/core/core-flows/src/order/workflows/add-line-items.ts
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* Update packages/core/core-flows/src/order/workflows/create-order.ts
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
---------
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
This is part of a set of stacked PRs to add support for view configurations, which will allow users to customize the columns shown in admin tables.
The functionality in this PR is behind the view_configuration feature flag.
**What**
- Adds an API to introspect the remote query schema and extract the columns that users can add to their table views (a rough sketch follows the notes below).
**Notes**
- This is a brute-force approach to get the functionality in place, and I expect it to evolve into something more elegant over time. Some ideas we might want to consider:
- Compute the entity columns during build time and store as static data the API can serve quickly.
- Offer developers more control over how their data can be exposed in this API with additional decorators in the DML.
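Assuming the remote query schema can be treated as GraphQL SDL, the brute-force extraction might look roughly like the following. This is a hedged sketch; the SDL input, entity name, and `extractColumns` helper are illustrative assumptions, not the actual implementation:

```ts
import { buildSchema, isObjectType, isScalarType } from "graphql"

// Collect the scalar fields of an entity type as the candidate columns a
// user could add to a table view. Relations and nested objects are skipped.
function extractColumns(sdl: string, entity: string): string[] {
  const schema = buildSchema(sdl)
  const type = schema.getType(entity)
  if (!type || !isObjectType(type)) {
    return []
  }
  return Object.values(type.getFields())
    .filter((field) => isScalarType(field.type))
    .map((field) => field.name)
}

// extractColumns("type Order { id: ID, display_id: Int, email: String }", "Order")
// -> ["id", "display_id", "email"]
```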
* feat: add view_configurations feature flag
- Add feature flag provider and hooks to admin dashboard
- Add backend API endpoint for feature flags
- Create view_configurations feature flag (disabled by default)
- Update order list table to use legacy version when flag is disabled
- Can be enabled with MEDUSA_FF_VIEW_CONFIGURATIONS=true env var
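A sketch of what the flag definition could look like, following the usual Medusa flag shape of `key`, `default_val`, `env_key`, and `description` (the exact file location and typing are assumptions):

```ts
// Hedged sketch of the view_configurations flag definition.
const ViewConfigurationsFeatureFlag = {
  key: "view_configurations",
  default_val: false, // disabled by default
  env_key: "MEDUSA_FF_VIEW_CONFIGURATIONS", // set to "true" to enable
  description: "Enable customizable view configurations for admin tables",
}

export default ViewConfigurationsFeatureFlag
```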
* fix: naming
* fix: feature flags unauthenticated
* fix: add test
* feat: add settings module
* fix: deps
* fix: cleanup
* fix: add more tests
* fix: rm changelog
* fix: deps
* fix: add settings module to default modules list
* feat(api): add view configuration API routes
- Add CRUD endpoints for view configurations
- Add active view configuration management endpoints
- Add feature flag middleware for view config routes
- Add comprehensive integration tests
- Add HTTP types for view configuration payloads and responses
- Support system defaults and user-specific configurations
- Enable setting views as active during create/update operations
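As a rough illustration of how a client might exercise these endpoints; the path and payload fields below are hypothetical (the actual path even changes in a later commit), so treat this purely as a sketch:

```ts
// Hypothetical usage sketch; the route and fields are illustrative only.
const base = "/admin/view-configurations"

// Create a view configuration and mark it as active in one call.
const created = await fetch(base, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    entity: "orders",
    name: "My order view",
    configuration: { visible_columns: ["display_id", "email", "total"] },
    set_active: true, // supported during create/update per the list above
  }),
}).then((res) => res.json())

// List configurations for an entity (system defaults and user-specific ones).
const list = await fetch(`${base}?entity=orders`).then((res) => res.json())
```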
* fix: test
* fix: test
* fix: test
* fix: change view configuration path
* fix: tests
* fix: remove manual settings module config from integration tests
* fix: container typing
* fix: workflows
* test(draft-order): add draft order creation test with tax application by product type
* fix: add `product_type.id` and `items.product_type_id` fields to order utilities
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* chore(types, api): support shipping option type api endpoints
* core flows
* api
* typos
* compiler errors
* integration tests
* remove metadata
* changeset
* modify test
* upsert
* change remote query
* minor to patch
* description optional
* description optional
* woops my bad
* my bad again
---------
Co-authored-by: william bouchard <williambouchard@williams-MacBook-Pro.local>
* fix: createCustomerGroupsStep rollback deletes created customer groups instead of customers
* Create serious-seahorses-switch.md
---------
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
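For context, a hedged sketch of the compensation pattern the fix above restores: the rollback must delete the customer groups the step created, not customers. The service and method names here are assumptions for illustration:

```ts
import { createStep, StepResponse } from "@medusajs/workflows-sdk"

export const createCustomerGroupsStep = createStep(
  "create-customer-groups",
  async (input: { groups: { name: string }[] }, { container }) => {
    const service = container.resolve("customer") as any
    const created = await service.createCustomerGroups(input.groups)
    // Hand the created group ids to the compensation function.
    return new StepResponse(created, created.map((g: { id: string }) => g.id))
  },
  async (createdGroupIds: string[] | undefined, { container }) => {
    if (!createdGroupIds?.length) {
      return
    }
    const service = container.resolve("customer") as any
    // Compensation: remove the groups created above, leaving customers intact.
    await service.deleteCustomerGroups(createdGroupIds)
  }
)
```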
* fix(core-flows): guest customer updates email to another guest account
* wip: refactor test
* fix: refactor step, add testing
* fix: update test
* fix: string assertion in a test
**What**
Fixed CSV import functionality to properly handle columns that were previously causing import errors.
**Why**
Users were encountering "Invalid column name(s)" errors when importing CSV files containing system-generated columns like "Product Created At", "Product Updated At", etc. These columns are automatically added by export templates but should be ignored during import since they're not part of the product creation schema.
Resolves FRMW-2983
**What**
- don't call `updateOrderTaxLinesWorkflow` when a shipping method is removed from a draft order (tax lines will be cascade deleted with the method)
Fixes: FRMW-2960
This PR adds support for processing large CSV files by breaking them into chunks and processing one chunk at a time. This is how it works in a nutshell (a sketch of the chunking loop follows the list):
- The CSV file is read as a stream and each chunk of the stream is one CSV row.
- We read up to 1000 rows (plus a few more to ensure the variants of a product are not split across multiple chunks).
- Each chunk is then normalized using the `CSVNormalizer` and validated using Zod schemas. If there is an error, the entire process is aborted and the existing chunks are deleted.
- Each chunk is written to a JSON file so that we can process it later (after the user confirms) without re-parsing or re-validating the CSV file.
- The confirmation step then consumes one chunk at a time, creating/updating products using the `batchProducts` workflow.
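A hedged sketch of the chunking loop described above; the `Product Handle` column name and the on-disk layout are assumptions for illustration:

```ts
import { createReadStream, promises as fs } from "node:fs"
import { parse } from "csv-parse"

const CHUNK_SIZE = 1000

async function chunkCsv(filePath: string, outDir: string): Promise<number> {
  // Stream the CSV so we never hold the whole file in memory.
  const rowsStream = createReadStream(filePath).pipe(parse({ columns: true }))
  let rows: Record<string, string>[] = []
  let chunks = 0

  const flush = async () => {
    if (rows.length) {
      // Persist the chunk as JSON so later processing (after the user
      // confirms) does not need to re-parse or re-validate the CSV.
      await fs.writeFile(`${outDir}/chunk-${chunks++}.json`, JSON.stringify(rows))
      rows = []
    }
  }

  for await (const row of rowsStream) {
    // Flush only at a product boundary, so the variants of one product
    // are never split across chunks ("a few more" rows may be kept).
    const previous = rows[rows.length - 1]
    if (rows.length >= CHUNK_SIZE && previous?.["Product Handle"] !== row["Product Handle"]) {
      await flush()
    }
    rows.push(row)
  }
  await flush()
  return chunks
}
```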
## To resume or not to resume processing of chunks
Let's imagine that, during processing, chunk 3 leads to a database error. By this time we have already processed the first two chunks. How do we deal with this situation? The options are:
- We store which chunk failed, and during the re-upload we skip the chunks before it. In my conversation with @olivermrbl we found that resuming would have to rely on certain assumptions if we decide to implement it (a sketch of this option follows the list).
- What if a user updates CSV rows that are part of already-processed chunks? Those changes would be silently ignored, and the user would never notice.
- Resuming works only if the file name is still the same. What if they made changes and saved the file via "Save as" under a new name? In that case we will process the entire file anyway.
- We will have to fetch the old workflow from the workflow engine using some `ilike` search, so that we can see at which chunk the last run failed for the given file.
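If we did implement it, the resume bookkeeping itself could be fairly small. A hypothetical sketch (all names here are illustrative):

```ts
// Skip chunks before the recorded failure; record a new failure point
// (e.g. keyed by file name in the workflow engine) before aborting.
async function processChunks(
  chunkCount: number,
  lastFailedChunk: number | undefined,
  processChunk: (index: number) => Promise<void>,
  recordFailure: (index: number) => Promise<void>
): Promise<void> {
  for (let i = lastFailedChunk ?? 0; i < chunkCount; i++) {
    try {
      await processChunk(i)
    } catch (error) {
      await recordFailure(i) // the next run resumes from this chunk
      throw error
    }
  }
}
```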
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Fixes: FRMW-2968
In this PR we have done two major things.
- First, we stop storing CSV contents in workflow storage, and we no longer store the JSON payloads to be created/updated in workflows. Earlier, these were all workflow inputs and were therefore persisted with the workflow.
- Second, we introduce a naive concept of chunks and process them one by one. The next PR makes chunking more robust by using streams, adding the ability to resume from a failed chunk, and so on.
> [!IMPORTANT]
> The new endpoint `/admin/product/imports` is not in use yet, but it will be after the next (final) PR.
## Old context in workflow storage

## New context in workflow storage

Fixes: FRMW-2965
In this PR we replace the existing step that normalizes a CSV file with the newly written CSV normalizer, and we further validate the file contents using a Zod schema.
I have duplicated the schema for now. But if it makes sense to re-use the schema for both CSV validation and `/admin/products/batch`, then I can keep one source of truth under utils and re-export it. WDYT?
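For reference, the validation is roughly of this shape; the schema fields below are a trimmed-down assumption, not the real product schema:

```ts
import { z } from "zod"

// Trimmed-down stand-in for the duplicated product row schema.
const productRowSchema = z.object({
  title: z.string().min(1),
  handle: z.string().optional(),
  status: z.enum(["draft", "published"]).optional(),
})

// Validate every normalized row; abort the whole import on the first
// invalid row so the caller can delete any chunks written so far.
function validateChunk(rows: unknown[]): void {
  for (const [index, row] of rows.entries()) {
    const result = productRowSchema.safeParse(row)
    if (!result.success) {
      const messages = result.error.issues.map((issue) => issue.message)
      throw new Error(`Row ${index + 1}: ${messages.join(", ")}`)
    }
  }
}
```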
**Screenshots of some errors after validating the file strictly**

