**What**
Fixed CSV import functionality to properly handle columns that were previously causing import errors.
**Why**
Users were encountering "Invalid column name(s)" errors when importing CSV files containing system-generated columns like "Product Created At", "Product Updated At", etc. These columns are automatically added by export templates but should be ignored during import since they're not part of the product creation schema.
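For illustration, a minimal sketch of how such columns could be filtered out before validation; the column list and helper name are assumptions, not the exact implementation:

```ts
// Illustrative only: drop export-template columns that are not part of the
// product creation schema before the row is validated.
const SYSTEM_COLUMNS = ["Product Created At", "Product Updated At"]

function stripSystemColumns(row: Record<string, string>): Record<string, string> {
  return Object.fromEntries(
    Object.entries(row).filter(([column]) => !SYSTEM_COLUMNS.includes(column))
  )
}

// stripSystemColumns({ "Product Title": "Shirt", "Product Created At": "2024-01-01" })
// -> { "Product Title": "Shirt" }
```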
Resolves FRMW-2983
**What**
Currently, filtering with a `deleted_at` value automatically applies the `withDeleted` flag, which in turn removes the default `deleted_at: null` constraint applied to all queries. The problem is that this does not account for the value assigned to `deleted_at`, leading to inconsistent behaviour depending on the value. E.g. filtering with `deleted_at: { $eq: null }`, where the expectation is to return only non-deleted records, ends up returning deleted records as well because the `withDeleted` flag is applied.
This PR reverts this auto-detection in favor of the user providing `withDeleted` explicitly (as it is already supported), alongside the filters.
Furthermore, some integration tests demonstrate how to filter deleted records (e.g. products) from the API. While the API did not properly support this before, this PR adds support for passing a `with_deleted` flag in the query, handled in line with our API conventions. Validators have been updated and the product list endpoint benefits from it. Also, the list config type was already accepting such a value, which I have translated to the remote query config.
Also, since the previous PR was adjusting the product types, I've adjusted them to match the expectation.
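To illustrate the intended contract, a hypothetical sketch (the `listProducts` signature below is made up for illustration and is not the actual Medusa service API):

```ts
type Filters = { deleted_at?: unknown }
type Config = { withDeleted?: boolean }

// Hypothetical signature for illustration only.
declare function listProducts(filters: Filters, config?: Config): Promise<unknown[]>

// Default behaviour: only non-deleted records, because the `deleted_at: null`
// constraint is no longer silently removed when a `deleted_at` filter is passed.
await listProducts({ deleted_at: { $eq: null } })

// To include soft-deleted records, opt in explicitly and filter on the column:
await listProducts({ deleted_at: { $ne: null } }, { withDeleted: true })
```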
**What**
Fixed a bug in the `prepareListQuery` function where nested field ordering was not properly building the expected nested object structure. The function was returning flat objects like { "employee.first_name": "ASC" } instead of the correct nested structure { "employee": { "first_name": "ASC" } }.
**Why**
The buildOrder function is designed to create nested objects from dot-notation field paths, which is essential for proper query building in the Medusa framework. When this functionality was broken, it prevented correct ordering of related fields and caused queries to fail or return unexpected results.
**How**
- Root cause: The `prepareListQuery` function was not properly utilizing the `buildOrder` utility function to transform dot-notation field paths into nested objects
- Before: order = "employee.first_name" → { "employee.first_name": "ASC" }
- After: order = "employee.first_name" → { "employee": { "first_name": "ASC" } }
- Added comprehensive tests: Created detailed unit tests for the prepareListQuery function focusing on buildOrder functionality, covering various scenarios including:
- Simple ascending/descending order
- Nested field ordering (e.g., product.title)
- Deeply nested ordering (e.g., product.variants.prices.amount)
- Multiple nesting levels (up to 5 levels deep)
- Added integration tests: Created integration tests in `product.spec.ts` to verify the full end-to-end functionality of nested ordering with variant titles
The fix ensures that the buildOrder function properly transforms dot-notation field paths into the expected nested object structure, enabling correct query building for related field ordering throughout the Medusa framework.
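For reference, a minimal sketch of the dot-notation to nested-object transformation described above (illustrative only, not the actual `buildOrder` source):

```ts
// Turns { "employee.first_name": "ASC" } into { employee: { first_name: "ASC" } }
function buildOrder(order: Record<string, string>): Record<string, any> {
  const result: Record<string, any> = {}

  for (const [path, direction] of Object.entries(order)) {
    const segments = path.split(".")
    let current = result

    // Create/walk intermediate objects for every segment except the last
    for (const segment of segments.slice(0, -1)) {
      current[segment] = current[segment] ?? {}
      current = current[segment]
    }

    // Assign the sort direction on the leaf segment
    current[segments[segments.length - 1]] = direction
  }

  return result
}

// buildOrder({ "product.variants.prices.amount": "DESC" })
// -> { product: { variants: { prices: { amount: "DESC" } } } }
```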
Resolves SUP-1868
Resolves FRMW-2978
**What**
Add a retry mechanism to database connection management to prevent failures when the server starts before the database connection becomes available.
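As a rough sketch of the idea (hypothetical helper, not the actual connection-manager code):

```ts
// Retry establishing the database connection a few times before giving up,
// so a server that boots faster than the database does not fail immediately.
async function connectWithRetry(
  connect: () => Promise<void>,
  retries = 5,
  delayMs = 1000
): Promise<void> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      await connect()
      return
    } catch (error) {
      if (attempt === retries) {
        throw error
      }
      // Wait before the next attempt to give the database time to become available
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
}
```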
FIXES SUP-1824
**What**
The Medusa internal service update should always return the data in the expected shape described by the interface. The Medusa service should not have to handle the reshaping.
Fixes: FRMW-2960
This PR adds support for processing large CSV files by breaking them into chunks and processing one chunk at a time. This is how it works in a nutshell (a rough sketch follows the list).
- The CSV file is read as a stream and each chunk of the stream is one CSV row.
- We read up to 1000 rows (plus a few more to ensure product variants of a product are not split into multiple chunks).
- Each chunk is then normalized using the `CSVNormalizer` and validated using zod schemas. If there is an error, the entire process will be aborted and the existing chunks will be deleted.
- Each chunk is written to a JSON file, so that we can process them later (after user confirms) without re-processing or validating the CSV file.
- The confirmation process will start consuming one chunk at a time and create/update products using the `batchProducts` workflow.
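A rough sketch of the chunking idea, under assumptions: rows arrive as an async iterable of parsed CSV records, `writeChunk` persists a chunk to a JSON file, and the "Product Handle" column marks product boundaries. The names are illustrative, not the actual implementation:

```ts
const CHUNK_SIZE = 1000

async function chunkRows(
  rows: AsyncIterable<Record<string, string>>,
  writeChunk: (index: number, rows: Record<string, string>[]) => Promise<void>
): Promise<void> {
  let buffer: Record<string, string>[] = []
  let chunkIndex = 0
  let lastHandle: string | undefined

  for await (const row of rows) {
    const handle = row["Product Handle"]

    // Flush once we have at least CHUNK_SIZE rows, but only when the incoming
    // row starts a new product so variants are never split across chunks.
    if (buffer.length >= CHUNK_SIZE && handle !== lastHandle) {
      await writeChunk(chunkIndex++, buffer)
      buffer = []
    }

    buffer.push(row)
    lastHandle = handle
  }

  if (buffer.length) {
    await writeChunk(chunkIndex, buffer)
  }
}
```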
## To resume or not to resume processing of chunks
Let's imagine that during processing of chunks, we find that chunk 3 leads to a database error. However, by this time we have already processed the first two chunks. How do we deal with this situation? Options are:
- We store at which chunk we failed and then during the re-upload we ignore chunks before the failed one. In my conversation with @olivermrbl we discovered that resuming will have to work with certain assumptions if we decide to implement it.
- What if a user updates the CSV rows which are part of the already processed chunks? These changes will be ignored and they will never notice it.
- Resuming works only if the file name is still the same. What if they made changes and saved the file under a new name via "Save as"? In that case we will process the entire file anyway.
- We will have to fetch the old workflow from the workflow engine using some `ilike` search, so that we can see at which chunk the last run failed for the given file.
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Fixes: FRMW-2974
Currently, during product imports, we create multiple chunks that must be deleted after the import has finished (either successfully or with an error). Deleting files one by one leads to multiple network calls and slows everything down.
The `bulkDelete` method deletes multiple files (by their `fileKey`) in one go.
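A hypothetical usage sketch (the exact shape of the service and the method signature may differ from the real API):

```ts
// Hypothetical shape for illustration only.
declare const fileModuleService: {
  bulkDelete(files: { fileKey: string }[]): Promise<void>
}

// Delete every chunk file produced during the import in a single call,
// instead of issuing one network request per file.
const chunkFileKeys = ["import-chunk-0.json", "import-chunk-1.json", "import-chunk-2.json"]

await fileModuleService.bulkDelete(chunkFileKeys.map((fileKey) => ({ fileKey })))
```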
* feat: Add an analytics module and local and posthog providers
* fix: Add tests and wire up in missing places
* fix: Address feedback and add missing module typing
* fix: Address feedback and add missing module typing
---------
Co-authored-by: Adrien de Peretti <adrien.deperetti@gmail.com>
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
Fixes: FRMW-2965
In this PR we replace the existing step that normalizes a CSV file with the newly written CSV normalizer, and we also validate the file contents further using a Zod schema.
I have duplicated the schema for now. But if it makes sense to re-use the schema for CSV validation and `/admin/products/batch`, I can keep one source of truth under utils and re-export it. WDYT?
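For illustration, a hedged sketch of what strict row validation with Zod could look like; the column names and schema shape are assumptions, not the schema added in this PR:

```ts
import { z } from "zod"

// Unknown columns cause a validation error because of .strict()
const productRowSchema = z
  .object({
    "Product Handle": z.string().min(1),
    "Product Title": z.string().min(1),
    "Variant SKU": z.string().optional(),
    "Variant Price EUR": z.coerce.number().optional(),
  })
  .strict()

const row = {
  "Product Handle": "t-shirt",
  "Product Title": "Medusa T-Shirt",
  "Variant SKU": "TSHIRT-S",
}

const result = productRowSchema.safeParse(row)
if (!result.success) {
  // Abort the import and surface the validation issues to the user
  throw new Error(result.error.issues.map((issue) => issue.message).join("\n"))
}
```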
**Screenshots of some errors after validating the file strictly**


* add failing test for upsertWithReplace order
* reproduce prices update shuffling issue
* fix: fix order of returned updates in updateMany
* fix: fix order of returned updates in ProductService
* fix: reset test count to 1
* Create tame-insects-marry.md
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* fix: Plugin admin folder loading with backslash on Windows
* fix: Plugin admin folder loading with backslash on Windows - Add changeset
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* feat: implement direct upload
* feat: add direct-upload endpoint
* refactor: implement feedback
* refactor: have a dedicated endpoint for direct uploads
* refactor: convert responses to snakecase
* refactor: rename method to createImport
* test: add tests for the presigned-urls endpoint