FIXES SUP-1824
**What**
The Medusa internal service update should always return data in the expected shape described by the interface. The Medusa service should not have to handle the reshaping itself.
FIXES CLO-524
**What**
Add a hidden stepDefinition object as part of the step argument, and ensure the runAsStep handlers rely on the latest definition when config is applied on the returned step. This ensures async configuration propagation and nested configuration work as expected.
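For illustration, here is a minimal sketch of the pattern this affects, assuming the Medusa workflows SDK (the step, workflow, and config names here are made up): configuration applied on the returned step should propagate to the definition the runAsStep handler uses at execution time.
```ts
import {
  createStep,
  createWorkflow,
  StepResponse,
  WorkflowResponse,
} from "@medusajs/framework/workflows-sdk"

// A trivial step used only to illustrate the config propagation scenario.
const greetStep = createStep("greet", async (input: { name: string }) => {
  return new StepResponse(`Hello, ${input.name}`)
})

export const greetWorkflow = createWorkflow(
  "greet-workflow",
  (input: { name: string }) => {
    // config() on the returned step (e.g. renaming it) should now be picked up
    // through the hidden stepDefinition carried on the step argument.
    const greeting = greetStep(input).config({ name: "greet-renamed" })
    return new WorkflowResponse(greeting)
  }
)
```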
* [feat] add Indonesian json translation file
* [feat] include and export the Indonesian translation in the index file
* [feat] export the Indonesian language object in languages.ts
---------
Co-authored-by: luky <luzion1508@gmail.com>
Fixes: FRMW-2960
This PR adds support for processing large CSV files by breaking them into chunks and processing one chunk at a time. In a nutshell, this is how it works (a rough sketch follows the list):
- The CSV file is read as a stream and each chunk of the stream is one CSV row.
- We read up to 1,000 rows (plus a few more to ensure the product variants of a product are not split across multiple chunks).
- Each chunk is then normalized using the `CSVNormalizer` and validated using zod schemas. If there is an error, the entire process will be aborted and the existing chunks will be deleted.
- Each chunk is written to a JSON file, so that we can process them later (after user confirms) without re-processing or validating the CSV file.
- The confirmation process will start consuming one chunk at a time and create/update products using the `batchProducts` workflow.
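The following sketch illustrates the general idea, not the actual implementation: stream the file line by line, buffer roughly 1,000 rows, and write each chunk to a JSON file for later processing. The naive comma splitting and the chunk file names are assumptions; the real flow normalizes rows with the `CSVNormalizer` and validates them with zod before persisting a chunk.
```ts
import { createReadStream } from "node:fs"
import { writeFile } from "node:fs/promises"
import { createInterface } from "node:readline"

const CHUNK_SIZE = 1000

async function chunkCsv(filePath: string) {
  // Each line of the stream is treated as one CSV row.
  const lines = createInterface({ input: createReadStream(filePath) })

  let header: string[] | undefined
  let rows: Record<string, string>[] = []
  let chunkIndex = 0

  const flush = async () => {
    if (!rows.length) return
    // Persist the chunk so it can be consumed later without re-reading the CSV.
    await writeFile(`chunk-${chunkIndex++}.json`, JSON.stringify(rows))
    rows = []
  }

  for await (const line of lines) {
    // Simplified splitting; a real CSV parser must handle quoted commas, etc.
    const cells = line.split(",")
    if (!header) {
      header = cells
      continue
    }
    rows.push(Object.fromEntries(header.map((key, i) => [key, cells[i] ?? ""])))
    if (rows.length >= CHUNK_SIZE) {
      await flush()
    }
  }
  await flush()
}
```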
## To resume or not to resume processing of chunks
Let's imagine that, during the processing of chunks, chunk 3 leads to a database error. By that point, however, we have already processed the first two chunks. How do we deal with this situation? The options are:
- We store which chunk failed and then, during the re-upload, skip the chunks that came before it. In my conversation with @olivermrbl, we concluded that resuming would have to rely on certain assumptions if we decide to implement it:
  - What if a user updates CSV rows that are part of already processed chunks? Those changes will be ignored, and they will never notice it.
  - Resuming only works if the file name stays the same. What if they made changes and saved the file with "Save as" under a new name? In that case we will process the entire file anyway.
  - We will have to fetch the old workflow from the workflow engine using some `ilike` search, so that we can see at which chunk the last run failed for the given file.
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Fixes: FRMW-2974
Currently, during product imports, we create multiple chunks that must be deleted after the import has finished (either successfully or with an error). Deleting the files one by one leads to multiple network calls and slows everything down.
The `bulkDelete` method deletes multiple files (by their fileKey) in one go.
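As a rough illustration of the difference, here is what a provider-side `bulkDelete` could look like; only the method name and the `fileKey`-based input come from this PR, while the class name and the S3 client usage are assumptions.
```ts
import { DeleteObjectsCommand, S3Client } from "@aws-sdk/client-s3"

class S3FileStorage {
  constructor(private client: S3Client, private bucket: string) {}

  // Delete many files in a single request instead of one network call per file.
  async bulkDelete(files: { fileKey: string }[]): Promise<void> {
    if (!files.length) {
      return
    }
    await this.client.send(
      new DeleteObjectsCommand({
        Bucket: this.bucket,
        Delete: { Objects: files.map((f) => ({ Key: f.fileKey })) },
      })
    )
  }
}
```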
**What**
- Adjust the lock duration; it is in seconds, not milliseconds
- Log on lock release
- Renew lock separately from the other promises
- Add more logs
- Log when the lock can't be acquired. This is expected when two processes try to sync the same entity, in which case it can be ignored, but it at least gives some information if it happens for another reason
- Log when releasing the lock fails; the lock will remain in the locking provider for 1 minute before being removed, but it won't prevent other entities from being synced
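For context, the sketch below illustrates the behaviour described above under an assumed locking interface (the `acquire`/`extend`/`release` names and their options are hypothetical): a TTL expressed in seconds, renewal handled separately from the sync promise, and logs when acquiring or releasing the lock fails.
```ts
type Logger = { info: (msg: string) => void; warn: (msg: string) => void }

// Hypothetical locking provider shape; `expire` is in seconds, not milliseconds.
type Locking = {
  acquire(key: string, opts: { expire: number }): Promise<boolean>
  extend(key: string, opts: { expire: number }): Promise<void>
  release(key: string): Promise<void>
}

async function syncWithLock(
  locking: Locking,
  logger: Logger,
  key: string,
  sync: () => Promise<void>
) {
  const acquired = await locking.acquire(key, { expire: 60 })
  if (!acquired) {
    // Expected when two processes try to sync the same entity; log and skip.
    logger.info(`Could not acquire lock for ${key}, skipping sync`)
    return
  }

  // Renew the lock on its own interval, independently of the other promises.
  const renewal = setInterval(() => {
    locking
      .extend(key, { expire: 60 })
      .catch((e) => logger.warn(`Failed to renew lock for ${key}: ${e}`))
  }, 30_000)

  try {
    await sync()
  } finally {
    clearInterval(renewal)
    try {
      await locking.release(key)
      logger.info(`Released lock for ${key}`)
    } catch (e) {
      // The lock will expire on its own after its TTL; other entities can still sync.
      logger.warn(`Failed to release lock for ${key}: ${e}`)
    }
  }
}
```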
The `auth.login` method of the JS SDK allows passing a custom payload, which is useful for custom authentication providers. For example:
```ts
const response = await sdk.auth.login("customer", "phone-auth", {
  phone
})
```
However, the `auth.register` method doesn't allow that, so we can't do the following:
```ts
const response = await sdk.auth.register("customer", "phone-auth", {
  phone
})
```
Instead, we'd have to use the `client.fetch` method.
This PR fixes the input type of the payload passed to the `register` method to be similar to that of `login`, which allows using it with custom authentication providers.
Since the runtime of `@medusajs/analytics-posthog` relies on the `posthog-node` package, it should be installed either as a dependency or as a peerDependency that is satisfied by the user's project.
In this PR, I have added it as a peer dependency.
Fixes: FRMW-2968
In this PR, we have done two major things:
- First, we stop storing the CSV contents in the workflow storage, and we no longer store the JSON payloads to be created/updated there either. Earlier, they were all workflow inputs and hence were stored in the workflow.
- Second, we introduce a naive concept of chunks and process them one by one. The next PR will make chunking a bit more robust by using streams, adding the ability to resume from a failed chunk, and so on.
> [!IMPORTANT]
> The new endpoint `/admin/product/imports` is not in use yet, but it will be after the next (final) PR.
## Old context in workflow storage

## New context in workflow storage

* feat: Add an analytics module and local and posthog providers
* fix: Add tests and wire up in missing places
* fix: Address feedback and add missing module typing
* fix: Address feedback and add missing module typing
---------
Co-authored-by: Adrien de Peretti <adrien.deperetti@gmail.com>
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>