**What**
We found out that the pricing context built from the cart always contains the entire cart, which is wrong. Even though we improved the performance of the query, potentially having hundreds of constraints for nothing would still be costly. For that reason, we cache the attributes in memory, gathering them with the best query we can, and we renew them during a calculate prices run if the cache has been reset. That way, we ensure we don't have unnecessary checks on attributes that do not have rules.
Since we no longer have the rule types table, which used to do this for us and was very useful for finding these attributes, this approach should do IMO until we have a proper caching layer.
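A minimal sketch of the caching idea, assuming a MikroORM entity manager and an illustrative `price_rule` table; names are not the actual implementation:
```ts
import { EntityManager } from "@mikro-orm/postgresql"

// cached in memory for the lifetime of the process
let cachedRuleAttributes: string[] | null = null

export async function getRuleAttributes(em: EntityManager): Promise<string[]> {
  if (cachedRuleAttributes === null) {
    // one cheap aggregate query instead of validating every context key
    const rows: { attribute: string }[] = await em.execute(
      `select distinct attribute from price_rule`
    )
    cachedRuleAttributes = rows.map((row) => row.attribute)
  }
  return cachedRuleAttributes
}

// called so the next calculate-prices run refreshes the attribute list
export function resetRuleAttributesCache(): void {
  cachedRuleAttributes = null
}
```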
**What**
Reduce database queries where possible and use proper data structures and aggregation where possible in order to reduce the overall performance degradation.
**What**
I have removed the check for the context key that was fetching all available attributes and then stripping out the ones that do not exist. On big datasets this removes several hundred milliseconds of query execution.
* fix(workflow-engine-*): Prevent passing shared context reference
* prevent tests from hanging
* fix event handling
* add integration tests
* use interval for scheduled in tests
* skip tests for now
* Create silent-glasses-enjoy.md
* fix cancel
* changeset
* push multiple aliases
* test multiple field alias
* increase wait time to index on test
---------
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Co-authored-by: Carlos R. L. Rodrigues <rodrigolr@gmail.com>
**What**
First iteration to prevent events from overwhelming the system.
- Group emitted event IDs when possible instead of creating a message per ID, which massively reduces the number of events to process in cases such as imports (see the sketch after this list)
- Update the index engine to process event data in batches of 100
- Update event handling by the index engine to be able to upsert by batch as well
- Fix the index engine build config for intermediate listener inference
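A sketch of the grouping and batching, under the assumption that each event carries a single id in its payload (illustrative types, not the engine's internals):
```ts
type IndexEvent = { name: string; data: { id: string } }

// one message per event name carrying all ids, instead of one message per id
function groupEventIds(events: IndexEvent[]): Map<string, string[]> {
  const grouped = new Map<string, string[]>()
  for (const event of events) {
    const ids = grouped.get(event.name) ?? []
    ids.push(event.data.id)
    grouped.set(event.name, ids)
  }
  return grouped
}

// the index engine then consumes the grouped ids in batches of 100
function chunk<T>(items: T[], size = 100): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}
```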
What:
* Add `hasMany` for links that accepted that in the past, so we don't block marketplaces from linking one cart to multiple orders, or carts with split payments (a sketch follows the commit list below).
* feat: add createMultiple flag to enforce inApp link uniqueness
* changes
* mocks
* default
* many to many
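A hypothetical sketch of such a link definition; the exact option name in the final API may differ (`isList` shown here), but the idea is one cart linking to many orders:
```ts
import { defineLink } from "@medusajs/framework/utils"
import CartModule from "@medusajs/medusa/cart"
import OrderModule from "@medusajs/medusa/order"

// e.g. a marketplace cart that is split into one order per vendor
export default defineLink(CartModule.linkable.cart, {
  linkable: OrderModule.linkable.order,
  isList: true,
})
```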
---------
Co-authored-by: Carlos R. L. Rodrigues <rodrigolr@gmail.com>
**What**
- use the return received total to compute the pending difference with the return requested total (illustrated below)
**Why**
- previously, the outstanding amount shown when receiving a return was incorrect, and the outstanding amount after the return was received was incorrect as well
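A minimal sketch of the computation with illustrative values:
```ts
// e.g. a return requested for 100, of which 60 has been received so far
const returnRequestedTotal = 100
const returnReceivedTotal = 60

// the pending difference now uses the received total
const pendingDifference = returnRequestedTotal - returnReceivedTotal // 40
```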
**What**
Jest patches the event emitter, which can sometimes lead to flaky behavior and block the test execution if the done callback is never reached. To prevent that from happening, the fail trap calls the done callback after a given time and warns that the test could not be concluded because Jest blocked it.
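A hypothetical sketch of such a fail trap (not the actual helper):
```ts
// wraps a done-style callback so a hung test still completes, with a warning
function trapDone(done: (err?: Error) => void, timeoutMs = 5000) {
  const timer = setTimeout(() => {
    console.warn("Test could not be concluded: Jest blocked the event emitter")
    done()
  }, timeoutMs)

  // normal completion cancels the trap
  return (err?: Error) => {
    clearTimeout(timer)
    done(err)
  }
}
```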
**What**
Currently, only cron patterns are supported by scheduled jobs, which can lead to issues. For example, if you set the pattern to execute every hour at minute 0 and second 0 (as it is expected to execute at exactly that constraint), but the moment it actually runs falls outside second 0, then the job won't get executed until the next scheduled cron table execution.
With this PR we introduce the `interval` configuration, which allows you to specify a delay between executions in ms (e.g. every minute -> 60 * 1000 ms), ensuring that once a job is executed, another one is scheduled for a minute later.
**Usage**
```ts
// jobs/job-1.ts
const thirtySeconds = 30 * 1000
export const config = {
  name: "job-1",
  schedule: {
    interval: thirtySeconds,
  },
}
```
* fix: preserve payment sessions during certain Stripe errors for webhook reconciliation
fix: add retry mechanism for errors that might resolve after a retry
fix: authorizePaymentSession method will update payment_session.status regardless of whether or not the authorization is successful
* Refactor: improve handling structure and syntax
- Move HandledErrorType definition to the top of stripe-base
- Use timers/promises for setTimeout
- Removed data in HandledErrorType when retry is true
* refactor: improve error handling flow and logic
- Simplify return statement in initiatePayment to handle null cases
- Remove redundant if-check in handleStripeError and rely on switch
- Reorder conditional checks in executeWithRetry for clearer flow
- Update executeWithRetry to check for retry=false condition first
* clean up
* fix: improve payment error handling and traceability
- Return structured error state for StripeAPIError instead of null
- Throw error when retries are exhausted and no payment intent exists
- Update type definitions to support error state tracking
* fix formatting and naming
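A sketch of what such a retry helper can look like, using `timers/promises` as the commits mention; this is illustrative, not the provider's actual `executeWithRetry`:
```ts
import { setTimeout as delay } from "node:timers/promises"

async function executeWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt === maxRetries) {
        break // retries exhausted, surface the error
      }
      await delay(delayMs) // wait before the next attempt
    }
  }
  throw lastError
}
```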
What:
* Old deployments have repeatable jobs registered in the wrong queue. When the `server` instance picks up such a job, the workflow doesn't exist, so it calls to remove the job, which then removes the job from the new queue.
* This PR cleans up any repeatable job from the queue that is exclusive to handle workflows.
**What**
Currently, the Redis workflow engine makes no distinction between worker modes: when starting as a server, the engine listens to the queue that contains everything and tries to execute the corresponding workflow, which does not exist since job workflows are not loaded in server mode. Now, a dedicated queue is created for jobs, and the worker is only started if the instance is not in server mode. To clean up the old queue, if the old queue triggers a scheduled job, that job is removed from the old queue, since it will get re-added to the new queue by the new worker instances.
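A hypothetical sketch of the mode-aware setup with BullMQ (queue names and env handling are illustrative, not the engine's internals):
```ts
import { Queue, Worker } from "bullmq"

const connection = { host: "localhost", port: 6379 }
const isServerMode = process.env.MEDUSA_WORKER_MODE === "server"

// scheduled-job workflows now live in a dedicated queue
export const jobsQueue = new Queue("workflow-scheduled-jobs", { connection })

if (!isServerMode) {
  // only instances that actually load job workflows consume from it
  new Worker(
    "workflow-scheduled-jobs",
    async (job) => {
      console.log(`executing scheduled workflow ${job.name}`)
    },
    { connection }
  )
}
```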
**What**
Currently, we potentially provide an array of selector/data pairs, which leads to fetching data sequentially before running the update, which in turn fetches the data again in batch and performs the update. Now we can pass the data directly, which already includes the id, and perform only one bulk fetch + one bulk update (see the sketch below).
This PR also includes a fix for the inventory validation: currently, only the items being updated have their inventory checked; with this PR we also check the inventory for the items that need to be created.
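A sketch contrasting the two call shapes; the service, method, and field names are illustrative:
```ts
// stub standing in for the actual service, for illustration only
const service = {
  async update(input: object[]): Promise<void> {
    console.log("updating", input.length, "records")
  },
}

// before: one fetch per selector, then a batched re-fetch before the update
await service.update([
  { selector: { sku: "SHIRT-S" }, data: { quantity: 10 } },
  { selector: { sku: "SHIRT-M" }, data: { quantity: 5 } },
])

// after: ids are already known, so one bulk fetch + one bulk update suffice
await service.update([
  { id: "item_1", quantity: 10 },
  { id: "item_2", quantity: 5 },
])
```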
**What**
- Prevent event release when a workflow is run as a step and finishes
- Use `unlink` instead of `del` when removing keys from Redis, pushing the actual deletion work to an async thread (see the sketch below)
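A minimal sketch of the second point, assuming an ioredis client: `UNLINK` returns immediately and reclaims memory on a background thread, whereas `DEL` blocks.
```ts
import Redis from "ioredis"

const redis = new Redis()

async function removeKeys(keys: string[]): Promise<void> {
  if (keys.length) {
    await redis.unlink(...keys) // non-blocking counterpart of del
  }
}
```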
**What**
- Fix the events emitted on soft deletion and restoration
- Improve the soft delete/restore algorithm
- Fix big number fields mishandling null values during partial hydration from MikroORM
**What**
Some steps were calling the modules even when there was nothing to do, which for some operations would create transactions for nothing, leading to extra execution time that adds up very quickly over a cloud network (see the sketch below).
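A hypothetical sketch of such a guard in a workflow step; the module key, method, and step name are assumptions, not the actual step:
```ts
import { StepResponse, createStep } from "@medusajs/framework/workflows-sdk"

const updateLevelsStep = createStep(
  "update-inventory-levels",
  async (updates: { id: string; quantity: number }[], { container }) => {
    if (!updates.length) {
      // nothing to do: skip the module call and the transaction it opens
      return new StepResponse([])
    }
    const inventory = container.resolve("inventory")
    return new StepResponse(await inventory.updateInventoryLevels(updates))
  }
)
```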
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
what:
- scopes the uniqueness index to only non-deleted records (see the migration sketch below)
- explicit sorting of buy-get promotions
- This error popped up as we removed the uniqueness constraint, which seems to have kept a specific order.
RESOLVES https://github.com/medusajs/medusa/issues/11606
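A hypothetical migration sketch of the scoped index; the table, column, and index names are illustrative:
```ts
import { Migration } from "@mikro-orm/migrations"

export class ScopeUniqueIndexToNonDeleted extends Migration {
  async up(): Promise<void> {
    // uniqueness now only applies to rows that are not soft deleted
    this.addSql(`drop index if exists "IDX_promotion_code_unique";`)
    this.addSql(
      `create unique index "IDX_promotion_code_unique" on "promotion" ("code") where "deleted_at" is null;`
    )
  }
}
```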
**What**
- Move create product to use the native create by structuring the data appropriately; this means no more `upsertWithReplace`, which was performing very poorly, and got 20x better performance on staging
- Improvements in `upsertWithReplace` to still get a performance boost in the places that still rely on it, mostly by bulking the operations when possible
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
* fix(): Allow to query deleted records from graph API
* handle empty q value
* update staled_at sync
* rename integration tests file
* Create strong-houses-marry.md
* try to fix flaky tests
* fix pricing context
* update changeset
* update changeset
* fix import
* skip test for now
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
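A sketch of how querying deleted records through the graph API can look in an API route; `withDeleted` and the filter shape are assumptions based on the PR title:
```ts
import { MedusaRequest, MedusaResponse } from "@medusajs/framework/http"
import { ContainerRegistrationKeys } from "@medusajs/framework/utils"

export async function GET(req: MedusaRequest, res: MedusaResponse) {
  const query = req.scope.resolve(ContainerRegistrationKeys.QUERY)

  // `withDeleted` opts soft-deleted rows into the result set
  const { data: deletedProducts } = await query.graph({
    entity: "product",
    fields: ["id", "deleted_at"],
    filters: { deleted_at: { $ne: null } },
    withDeleted: true,
  })

  res.json({ deleted_products: deletedProducts })
}
```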