* fix(medusa): sales_channel_id middleware manipulation leading to loss of the sales channel
* add unit tests
* improve
* integration tests
* fix(core-flows): Add more fulfillment data
* add more fields
---------
Co-authored-by: Riqwan Thamir <rmthamir@gmail.com>
* feat: Add support for uploading a file directly to the file provider from the client
* fix: Add missing types and add a couple of module tests
* fix: Allow nested routes, add test for it
What:
- Added the step config `skipOnPermanentFailure`: when the current step fails permanently, all subsequent steps are skipped. If a string is provided, the workflow resumes from the step with that name.
- Fixed `continueOnPermanentFailure` to continue executing the flow when a step fails.
```ts
createWorkflow("some-workflow", () => {
errorStep().config({
skipOnPermanentFailure: true,
})
nextStep1() // skipped
nextStep2() // skipped
})
createWorkflow("some-workflow", () => {
errorStep().config({
skipOnPermanentFailure: "resume-from-here",
});
nextStep1(); // skipped
nextStep2(); // skipped
nextStep3().config({ name: "resume-from-here" }); // executed
nextStep4(); // executed
});
```
**What**
Now that event management is fixed across the workflow life cycle, run-as-step needs to leverage the workflow engine when present (which should always be the case for async workflows) in order to ensure continuation and the ability to mark the parent step in the parent workflow as a success or failure.
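For context, a minimal sketch of the pattern, assuming Medusa's `createWorkflow`/`runAsStep` API; the workflow names, the input shape, and the import path are illustrative:

```ts
import {
  createWorkflow,
  WorkflowResponse,
} from "@medusajs/framework/workflows-sdk"
// Hypothetical async child workflow defined elsewhere.
import { childWorkflow } from "./child-workflow"

export const parentWorkflow = createWorkflow(
  "parent-workflow",
  (input: { id: string }) => {
    // When a workflow engine is registered, the child run is delegated to it,
    // so the engine can later mark this parent step as a success or failure.
    const result = childWorkflow.runAsStep({ input: { id: input.id } })

    return new WorkflowResponse(result)
  }
)
```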
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
**What**
- Custom settings routes were registered under `/settings/settings/custom-route` instead of `/settings/custom-route`
---
CLOSES SUP-1384
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
The `application_method_type` filter has a `string` type in the HTTP types. This PR sets the type accurately, so the generated OAS can show the possible filter values.
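As an illustration (not the exact diff), narrowing the filter from `string` to the known application method values could look like the following; the param type name is hypothetical:

```ts
// Assumed values: Medusa's promotion application methods are "fixed" and "percentage".
type ApplicationMethodTypeValue = "fixed" | "percentage"

// Hypothetical HTTP params type carrying the filter.
interface AdminGetPromotionsParams {
  // Before: application_method_type?: string
  application_method_type?: ApplicationMethodTypeValue
}
```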
**What**
We found out that the pricing context built from the cart always contains the entire cart, which isn't quite right. Even with query improvements, passing hundreds of potentially unnecessary constraints is costly. For that reason, we cache the relevant attributes in memory, gathering them with the best query we can, and we refresh them during a price calculation if the cache has been reset. That way, we avoid unnecessary checks on attributes that have no rules.
Since we no longer have the rule type table that previously handled this, and until we have a proper caching layer, this approach should do. The rule type table was very useful for discovering these attributes.
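A minimal sketch of the caching idea, assuming an invalidation hook when rules change; the class and method names are illustrative, not the actual implementation:

```ts
// Cache the attributes that actually have rules so price calculation doesn't
// rebuild hundreds of constraints on every call.
class RuleAttributeCache {
  private attributes: string[] | null = null

  constructor(private readonly loadAttributes: () => Promise<string[]>) {}

  // Reset when price rules change so the next calculation refreshes the cache.
  invalidate(): void {
    this.attributes = null
  }

  async get(): Promise<string[]> {
    if (this.attributes === null) {
      this.attributes = await this.loadAttributes()
    }
    return this.attributes
  }
}

// Usage inside a (hypothetical) calculate-prices flow: only keep the context
// attributes that can actually match a rule.
// const relevant = new Set(await cache.get())
// const pricingContext = Object.fromEntries(
//   Object.entries(fullCartContext).filter(([key]) => relevant.has(key))
// )
```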
**What**
Reduce database queries where possible and use appropriate data structures and aggregation to limit the overall performance degradation.
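As a rough illustration of the data-structure side (not the actual code), grouping rows in a single pass with a `Map` avoids repeated per-entity queries or array scans:

```ts
// Hypothetical row shape, for illustration only.
type PriceRow = { price_set_id: string; currency_code: string; amount: number }

function groupByPriceSet(rows: PriceRow[]): Map<string, PriceRow[]> {
  const grouped = new Map<string, PriceRow[]>()

  for (const row of rows) {
    const bucket = grouped.get(row.price_set_id)
    if (bucket) {
      bucket.push(row)
    } else {
      grouped.set(row.price_set_id, [row])
    }
  }

  return grouped
}
```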
**What**
I have removed the check for the context key that fetched all available attributes and then stripped out the ones that do not exist. On big datasets this saves several hundred milliseconds of query execution.
**What**
Currently the util awaits an event indefinitely, which can lead to cascading crashes in the Jest test suites and too much noise to investigate the real issues.
We now race the promise against a configurable default timeout to prevent waiting for an excessive amount of time.
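A minimal sketch of the approach; the helper name and the default timeout value are assumptions:

```ts
// Race the event subscription against a configurable timeout so a missing
// event fails the test instead of hanging it.
async function waitForEvent<T>(
  subscribe: () => Promise<T>,
  timeoutMs = 5000
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined

  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => {
      reject(new Error(`Timed out after ${timeoutMs}ms waiting for event`))
    }, timeoutMs)
  })

  try {
    return await Promise.race([subscribe(), timeout])
  } finally {
    clearTimeout(timer)
  }
}
```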
* fix(workflow-engine-*): Prevent passing shared context reference
* prevent tests from hanging
* fix event handling
* add integration tests
* use interval for scheduled in tests
* skip tests for now
* Create silent-glasses-enjoy.md
* fix cancel
* changeset
* push multiple aliases
* test multiple field alias
* increase wait time to index on test
---------
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Co-authored-by: Carlos R. L. Rodrigues <rodrigolr@gmail.com>
**What**
- Use resource id filtering where possible instead of relying on programmatic intersection checks over potentially hundreds of thousands of linked resources; there is no need to fetch everything and check in memory when the check can be done in the database (see the sketch below)
- Also fix the `normalizeDataForContext` middlewares
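A rough before/after sketch under an assumed repository interface (all names are hypothetical):

```ts
// Hypothetical repository interface, for illustration only.
interface LinkRepository {
  list(filter: { resource_id?: string[] }): Promise<{ resource_id: string }[]>
}

// Before: fetch every link row, then intersect in memory, which is costly when
// the link table holds hundreds of thousands of rows.
async function findLinkedInMemory(repo: LinkRepository, resourceIds: string[]) {
  const all = await repo.list({})
  const wanted = new Set(resourceIds)
  return all.filter((link) => wanted.has(link.resource_id))
}

// After: push the id constraint down to the database query.
async function findLinkedInDb(repo: LinkRepository, resourceIds: string[]) {
  return repo.list({ resource_id: resourceIds })
}
```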
**What**
First iteration to prevent events from overwhelming the system.
- Group emitted event ids when possible instead of creating a message per id, which massively reduces the number of events to process in cases such as imports (see the sketch after this list)
- Update the index engine to process event data in batches of 100
- Update event handling by the index engine so it can upsert in batches as well
- Fix the index engine build config for intermediate listener inference
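A minimal sketch of both ideas, with hypothetical names; the batch size of 100 matches the description above:

```ts
// One message carries many ids instead of one message per id (useful during imports).
type GroupedEvent = { name: string; data: { ids: string[] } }

function buildGroupedEvent(name: string, ids: string[]): GroupedEvent {
  return { name, data: { ids } }
}

// The index engine consumes the ids in batches so each upsert stays bounded.
async function processInBatches(
  ids: string[],
  upsertBatch: (batch: string[]) => Promise<void>,
  batchSize = 100
): Promise<void> {
  for (let i = 0; i < ids.length; i += batchSize) {
    await upsertBatch(ids.slice(i, i + batchSize))
  }
}
```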