**What**
Jest patches the event emitter, which can sometimes lead to flaky behavior and block test execution if the done callback is never reached. To prevent that from happening, the fail trap calls the done callback after a given time and warns that the test could not be concluded because Jest blocked it.
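A minimal sketch of the idea, assuming a Jest test that relies on a `done` callback (the helper name and timeout are hypothetical):
```ts
import { EventEmitter } from "node:events"

// Arms a timer that forces the done callback after a deadline, so a test whose
// event never fires cannot block the whole run; the happy path cancels the trap.
function failTrap(done: (error?: Error) => void, timeoutMs = 5000): () => void {
  const timer = setTimeout(() => {
    console.warn("Fail trap: done() was never reached, completing the test to unblock Jest")
    done()
  }, timeoutMs)

  return () => {
    clearTimeout(timer)
    done()
  }
}

const emitter = new EventEmitter()

it("emits the event", (done) => {
  const finish = failTrap(done)
  emitter.once("my-event", () => finish())
  emitter.emit("my-event")
})
```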
**What**
Currently, scheduled jobs only support cron patterns, which can lead to issues. For example, if you set the pattern to run every hour at minute 0 and second 0 (expecting execution exactly at that constraint) but by the time the job actually executes we are past second 0, the job won't run until the next scheduled cron slot.
With this PR we introduce the `interval` configuration, which lets you specify a delay between executions in ms (e.g. every minute -> 60 * 1000 ms) and ensures that once a job has executed, the next one is scheduled that interval later.
**Usage**
```ts
// jobs/job-1.ts
const thirtySeconds = 30 * 1000
export const config = {
  name: "job-1",
  schedule: {
    interval: thirtySeconds,
  },
}
```
* fix: preserve payment sessions during certain Stripe errors for webhook reconciliation
fix: add a retry mechanism for errors that might be resolved after a retry (a sketch of the pattern follows this commit list)
fix: authorizePaymentSession method will update payment_session.status regardless of whether or not the authorization is successful
* Refactor: improve handling structure and syntax
- Move HandledErrorType definition to the top of stripe-base
- Use timers/promises for setTimeout
- Removed data in HandledErrorType when retry is true
* refactor: improve error handling flow and logic
- Simplify return statement in initiatePayment to handle null cases
- Remove redundant if-check in handleStripeError and rely on switch
- Reorder conditional checks in executeWithRetry for clearer flow
- Update executeWithRetry to check for retry=false condition first
* clean up
* fix: improve payment error handling and traceability
- Return structured error state for StripeAPIError instead of null
- Throw error when retries are exhausted and no payment intent exists
- Update type definitions to support error state tracking
* fix formatting and naming
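For context, a minimal sketch of the retry pattern described in these commits, using `timers/promises` as mentioned above; the error shape and function names are hypothetical, not the actual provider implementation:
```ts
import { setTimeout as sleep } from "node:timers/promises"

type HandledError = { retry: boolean; code?: string }

// Retries the given call only when the classified error is marked retryable,
// checking the retry=false condition first and backing off between attempts.
async function executeWithRetry<T>(
  fn: () => Promise<T>,
  classify: (error: unknown) => HandledError,
  maxRetries = 3,
  delayMs = 500
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn()
    } catch (error) {
      const handled = classify(error)
      if (!handled.retry) {
        // Non-retryable: surface the error immediately.
        throw error
      }
      lastError = error
      await sleep(delayMs * (attempt + 1))
    }
  }
  // Retries are exhausted: throw instead of returning a silent null.
  throw lastError
}
```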
What:
* Old deployments have repeatable jobs registered in the wrong queue. When the `server` instance picks up such a job, the workflow doesn't exist, so it calls to remove the job, which then removes it from the new queue.
* This PR cleans up any repeatable job from the queue that is exclusive to handling workflows.
**What**
Currently, the Redis workflow engine makes no distinction between worker modes: when started as a server, the engine listens to the queue that contains everything and tries to execute the corresponding workflow, which does not exist since job workflows are not loaded in server mode. Now, a dedicated queue is created for jobs, and its worker is only started if the instance is not in server mode. To clean up the old queue, if the old queue triggers a scheduled job, that job is removed from it, since it will be re-added to the new queue by the new worker instances.
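A rough sketch of the idea, assuming BullMQ; the queue name, env variable, and helper are hypothetical, not the engine's actual implementation:
```ts
import { Queue, Worker } from "bullmq"

const connection = { host: "localhost", port: 6379 }

// Dedicated queue for scheduled-job workflows, separate from the main workflow queue.
const jobQueue = new Queue("workflow-engine-jobs", { connection })

// Producers register repeatable jobs on the dedicated queue.
await jobQueue.add("daily-report", {}, { repeat: { pattern: "0 0 * * *" } })

async function runScheduledWorkflow(name: string, data: unknown): Promise<void> {
  // Placeholder: execute the workflow registered for this scheduled job.
}

const workerMode = process.env.MEDUSA_WORKER_MODE ?? "shared"

// The worker for the jobs queue is only started outside server mode,
// since job workflows are not loaded when running as a server.
if (workerMode !== "server") {
  new Worker(
    "workflow-engine-jobs",
    async (job) => runScheduledWorkflow(job.name, job.data),
    { connection }
  )
}
```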
**What**
Currently, we potentially provide an array of selector/data pairs, which leads to fetching data sequentially before running the update, which then fetches the data again in batch and performs the update. Now we can pass the data directly, which already includes the id, and only perform one bulk fetch plus one bulk update.
This PR also includes a fix to the inventory validation: currently only the item whose inventory is being updated is checked; with this PR we also check the inventory of the items that need to be created.
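To illustrate the shape change (types and the service call are hypothetical, only meant to show the before/after):
```ts
// Before: each entry carries a selector, so the ids have to be resolved
// sequentially before the actual batched update can run.
type SelectorUpdate = {
  selector: { id: string }
  data: { quantity?: number }
}

// After: the id is part of the payload itself, so the service can do a single
// bulk fetch of all ids followed by a single bulk update.
type DirectUpdate = { id: string; quantity?: number }

const updates: DirectUpdate[] = [
  { id: "item_123", quantity: 2 },
  { id: "item_456", quantity: 5 },
]

// e.g. await service.updateItems(updates) // one bulk fetch + one bulk update
```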
**What**
- Prevent event release when a workflow is run as a step and finishes
- Use `unlink` instead of `del` when removing keys from Redis so the actual removal happens on an async thread (see the sketch below)
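A quick sketch, assuming ioredis and a hypothetical key name: `UNLINK` removes the key immediately but reclaims the memory on a background thread, whereas `DEL` frees it synchronously on Redis' main thread.
```ts
import Redis from "ioredis"

async function removeExecutionKey(redis: Redis, key: string): Promise<void> {
  // Before: blocking delete
  // await redis.del(key)

  // After: the key disappears right away and the memory is reclaimed
  // asynchronously, so large values no longer stall other commands.
  await redis.unlink(key)
}
```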
**What**
- Fix the events emitted on soft deletion and restoration
- Improve the soft delete/restore algorithm
- Fix big number fields mishandling null values during partial hydration from MikroORM
**What**
Some steps were calling the modules even when there was nothing to do, which for some operations would create a transaction for nothing, adding extra execution time that adds up very quickly over the cloud network.
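A minimal sketch of the kind of guard involved; the step, module, and types are hypothetical:
```ts
type ItemUpdate = { id: string; quantity: number }

interface ItemModule {
  updateItems(updates: ItemUpdate[]): Promise<ItemUpdate[]>
}

// Skip the module call entirely when the step has nothing to do,
// so no transaction is opened for an empty payload.
async function updateItemsStep(
  module: ItemModule,
  updates: ItemUpdate[]
): Promise<ItemUpdate[]> {
  if (!updates.length) {
    return []
  }
  return module.updateItems(updates)
}
```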
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
What:
- scopes the uniqueness index to non-deleted records only (see the migration sketch below)
- explicit sorting of buy-get promotions
  - This error popped up after we removed the uniqueness constraint, which seems to have kept a specific order.
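A minimal sketch, assuming a MikroORM migration; the index, table, and column names are illustrative only:
```ts
import { Migration } from "@mikro-orm/migrations"

export class Migration20250101000000 extends Migration {
  async up(): Promise<void> {
    // Recreate the unique index so it only applies to non-deleted rows.
    this.addSql(`drop index if exists "IDX_promotion_code_unique";`)
    this.addSql(
      `create unique index "IDX_promotion_code_unique" on "promotion" ("code") where "deleted_at" is null;`
    )
  }
}
```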
RESOLVES https://github.com/medusajs/medusa/issues/11606
**What**
- Move product creation to the native create by structuring the data appropriately. This means no more `upsertWithReplace`, which performed very poorly, and yields roughly 20x better performance on staging
- Improvements to `upsertWithReplace` to still get a performance boost in the places that still rely on it, mostly by bulking the operations when possible
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
* fix(): Allow to query deleted records from graph API
* fix(): Allow to query deleted records from graph API
* handle empty q value
* update staled at sync
* rename integration tests file
* Create strong-houses-marry.md
* try to fix flaky tests
* fix pricing context
* update changeset
* update changeset
* fix import
* skip test for now
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
**What**
- Add index engine feature flag
- apply it to the `store/products` endpoint as well as `admin/products`
- Various query builder fixes
- search capabilities over the full data of every entity. The `q` search is applied to all joined tables involved in the selection/where clauses
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
What:
- `query.index` helper. It queries the Index Module and, if needed, aggregates the rest of the requested fields/relations like `query.graph` does.
Not covered in this PR:
- Hydrate only the sub-entities returned by the query. Example: if only 1 out of 5 variants is returned, it should only hydrate the data of that single entity; currently it will merge all the variants of the product.
- Generate types for the indexed data
example:
```ts
const query = container.resolve(ContainerRegistrationKeys.QUERY)
await query.index({
entity: "product",
fields: [
"id",
"description",
"status",
"variants.sku",
"variants.barcode",
"variants.material",
"variants.options.value",
"variants.prices.amount",
"variants.prices.currency_code",
"variants.inventory_items.inventory.sku",
"variants.inventory_items.inventory.description",
],
filters: {
"variants.sku": { $like: "%-1" },
"variants.prices.amount": { $gt: 30 },
},
pagination: {
order: {
"variants.prices.amount": "DESC",
},
},
})
```
This query returns all products where at least one variant has a SKU ending in `-1` and at least one price greater than `30`.
The Index Module only holds the data used to paginate and filter, and the returned object is:
```json
{
"id": "prod_01JKEAM2GJZ14K64R0DHK0JE72",
"title": null,
"variants": [
{
"id": "variant_01JKEAM2HC89GWS95F6GF9C6YA",
"sku": "extra-variant-1",
"prices": [
{
"id": "price_01JKEAM2JADEWWX72F8QDP6QXT",
"amount": 80,
"currency_code": "USD"
}
]
}
]
}
```
All the rest of the fields will be hydrated from their respective modules, and the final result will be:
```json
{
"id": "prod_01JKEAY2RJTF8TW9A23KTGY1GD",
"description": "extra description",
"status": "draft",
"variants": [
{
"sku": "extra-variant-1",
"barcode": null,
"material": null,
"id": "variant_01JKEAY2S945CRZ6X4QZJ7GVBJ",
"options": [
{
"value": "Red"
}
],
"prices": [
{
"amount": 20,
"currency_code": "CAD",
"id": "price_01JKEAY2T2EEYSWZHPGG11B7W7"
},
{
"amount": 80,
"currency_code": "USD",
"id": "price_01JKEAY2T2NJK2E5468RK84CAR"
}
],
"inventory_items": [
{
"variant_id": "variant_01JKEAY2S945CRZ6X4QZJ7GVBJ",
"inventory_item_id": "iitem_01JKEAY2SNY2AWEHPZN0DDXVW6",
"inventory": {
"sku": "extra-variant-1",
"description": "extra variant 1",
"id": "iitem_01JKEAY2SNY2AWEHPZN0DDXVW6"
}
}
]
}
]
}
```
Co-authored-by: Adrien de Peretti <25098370+adrien2p@users.noreply.github.com>
**What**
Fix linkable generation when there are no DML models and the models are provided as virtual to the joiner config, in which case the linkables are inferred.
This PR adds a couple of new statuses to the payment collection and payment webhook results. The payment collection will now be marked as "completed" once the captured amount equals the full amount of the payment collection.
There are several things left to improve in the payment setup so that non-happy-path cases are handled correctly.
1. Currently the payment session and payment models serve a very similar purpose. Part of the information lives on one model and the rest on the other, without any clear reason for the split. We can simplify the payment module and the data models by merging the two.
2. We need to handle failures more gracefully, such as setting the payment session status to failed when such a webhook comes in.
3. We should convert the payment collection status and the different amounts to calculated fields from the payment session, captures, and refunds, as they can easily be a source of inconsistencies.
Closes: FRMW-2892, FRMW-2893
**What**
Wired up the building blocks that we merged previously in order to manage data synchronization. The flow is as follows (a rough sketch follows the list):
- On application start:
  - Build the schema object representation from the configuration
  - Check for configuration changes
  - If new entities are configured:
    - The data synchronizer initializes the orchestrator and starts the sync
    - For each entity:
      - acquire a lock
      - mark the existing data as stale
      - sync all data in batches
        - mark synced rows as no longer stale
        - acknowledge each processed batch and renew the lock
        - update the metadata with the last synced cursor for entity X
      - release the lock
      - remove all remaining stale data
  - If any entities were removed since the last configuration:
    - remove their index data and relations
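A rough sketch of the per-entity loop; all interfaces and names are hypothetical, not the actual data synchronizer API:
```ts
interface SyncDeps {
  lock: {
    acquire(key: string): Promise<void>
    renew(key: string): Promise<void>
    release(key: string): Promise<void>
  }
  index: {
    markStale(entity: string): Promise<void>
    upsertBatch(entity: string, rows: unknown[]): Promise<void> // also clears the stale flag
    removeStale(entity: string): Promise<void>
  }
  source: {
    batches(entity: string): AsyncIterable<{ rows: unknown[]; cursor: string }>
  }
  metadata: {
    setLastCursor(entity: string, cursor: string): Promise<void>
  }
}

async function syncEntity(entity: string, deps: SyncDeps): Promise<void> {
  await deps.lock.acquire(entity)
  try {
    await deps.index.markStale(entity)
    for await (const { rows, cursor } of deps.source.batches(entity)) {
      await deps.index.upsertBatch(entity, rows)
      await deps.metadata.setLastCursor(entity, cursor)
      await deps.lock.renew(entity)
    }
    // Anything still flagged as stale no longer exists upstream.
    await deps.index.removeStale(entity)
  } finally {
    await deps.lock.release(entity)
  }
}
```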
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Add the data model name to `model.define` calls in the Order Module where it was missing, as we need to infer the names for the references we generate.
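A small sketch, assuming Medusa's DML; the entity and fields are illustrative only, the point being that the name passed to `model.define` is what the generated references are inferred from:
```ts
import { model } from "@medusajs/framework/utils"

// The explicit name ("OrderLineItem") is required so that references
// generated elsewhere can infer it.
const OrderLineItem = model.define("OrderLineItem", {
  id: model.id().primaryKey(),
  title: model.text(),
  quantity: model.number(),
})

export default OrderLineItem
```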
Closes DX-1305