* feat(index): add filterable fields to link definition
* rm comment
* break recursion
* validate read only links
* validate filterable
* gql schema array
* link parents
* isInverse
* push id when not present
* Fix circular relationships and add tests to ensure proper behaviour (part 1)
* log and fallback to entity.alias
* cleanup and fixes
* cleanup and fixes
* cleanup and fixes
* fix get attributes
* gql type
* unit test
* array inference
* rm only
* package.json
* package.json
* fix link retrieval on duplicated entity type and aliases + tests
* link parents as array
* Match only parent entity
* rm comment
* remove hard coded schema
* extend types
* unit test
* test
* types
* pagination type
* type
* fix integration tests
* Improve performance of `in` selection
* use @@ to filter property
* escape jsonPath
* add Event Bus by default
* changeset
* rm postgres analyze
* estimate count
* new query
* parent aliases
* inner query w/ filter and sort relations
* address comments
---------
Co-authored-by: adrien2p <adrien.deperetti@gmail.com>
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* fix: return fulfillment creation
* chore: changeset
* fix: link
* fix: cancel claim flow
* chore: test fulfillment creation as a part of return lifecycle test
* fix: exchanges cancel flow
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* feat: Add support for uploading a file directly to the file provider from the client
* fix: Add missing types and add a couple of module tests
* fix: Allow nested routes, add test for it
**What**
Now that all event management is fixed in the workflow lifecycle, the run-as-step needs to leverage the workflow engine when present (which should always be the case for async workflows) in order to ensure continuation and the ability to mark the parent step in the parent workflow as success or failure.
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
**What**
We found that the pricing context from the cart always contains the entire cart, which is not quite right. Even though we improved the performance of the query, it would still be costly to carry hundreds of constraints for nothing. For that reason, we cache the attributes in memory using the best possible query we can to gather them, and we renew them when we perform a calculate prices if the cache has been reset. That way, we ensure we don't run unnecessary checks on attributes that do not have rules.
Since we no longer have the rule type table, which used to do this for us, this should do until we have a proper caching layer IMO. The rule type table was very useful for these attribute lookups.
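A minimal sketch of the caching idea, assuming a hypothetical `loadRuleAttributes` query helper and cache shape; the actual pricing module code differs:
```ts
// Hypothetical in-memory cache of attributes that actually have pricing rules.
// It is renewed on the next calculate-prices call after it has been reset.
let ruleAttributesCache: Set<string> | null = null

// Assumed helper running the cheapest possible query to gather the attributes
// that have rules (e.g. SELECT DISTINCT attribute FROM price_rule).
async function loadRuleAttributes(): Promise<string[]> {
  return []
}

export function resetRuleAttributesCache(): void {
  ruleAttributesCache = null
}

export async function getRuleAttributes(): Promise<Set<string>> {
  if (!ruleAttributesCache) {
    ruleAttributesCache = new Set(await loadRuleAttributes())
  }
  return ruleAttributesCache
}

// When building the pricing context, keep only attributes that have rules
// instead of passing the entire cart as constraints.
export async function pruneContext(
  context: Record<string, unknown>
): Promise<Record<string, unknown>> {
  const attributes = await getRuleAttributes()
  return Object.fromEntries(
    Object.entries(context).filter(([key]) => attributes.has(key))
  )
}
```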
**What**
Reduce database queries where possible and use proper data structures and aggregation in order to reduce the overall performance hit.
**What**
I have removed the check for the context key that fetched all available attributes and then stripped out the ones that do not exist. On big datasets this removes several hundred milliseconds of query execution.
* fix(workflow-engine-*): Prevent passing shared context reference
* fix(workflow-engine-*): Prevent passing shared context reference
* prevent tests from hanging
* fix event handling
* add integration tests
* use interval for scheduled in tests
* skip tests for now
* Create silent-glasses-enjoy.md
* fix cancel
* changeset
* push multiple aliases
* test multiple field alias
* increase wait time to index on test
---------
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Co-authored-by: Carlos R. L. Rodrigues <rodrigolr@gmail.com>
**What**
First iteration to prevent events from overwhelming the system.
- Group emitted event ids when possible instead of creating a message per id, which massively reduces the number of events to process in cases such as imports
- Update the index engine to process event data in batches of 100
- Update event handling by the index engine to be able to upsert by batch as well
- Fix index engine build config for intermediate listeners inference
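A rough illustration of the batching described above; the helper names mirror the description and the batch size of 100, but this is not the actual index engine code:
```ts
// Split event data into batches of 100 before upserting, so a single import
// emitting thousands of ids doesn't turn into thousands of messages/queries.
const BATCH_SIZE = 100

function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}

// Hypothetical handler: one message carries many grouped ids,
// and the index engine upserts them batch by batch.
async function handleEvent(
  ids: string[],
  upsert: (batch: string[]) => Promise<void>
): Promise<void> {
  for (const batch of chunk(ids, BATCH_SIZE)) {
    await upsert(batch)
  }
}
```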
What:
* Add `hasMany` for links that accepted that in the past. That way we don't block marketplaces from linking 1 cart to multiple orders, or carts with split payment (see the sketch after this list).
* feat: add createMultiple flag to enforce inApp link uniqueness
* changes
* mocks
* default
* many to many
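A hypothetical sketch of a one-to-many link configuration; only the `hasMany` flag comes from this PR, while the config shape and entity names are illustrative:
```ts
// Illustrative one-to-many link config: one cart linked to many orders
// (e.g. marketplaces with split payment). The shape is assumed; only the
// hasMany flag comes from this PR.
interface LinkSideConfig {
  linkable: string
  hasMany?: boolean
}

export const cartOrderLink: [LinkSideConfig, LinkSideConfig] = [
  { linkable: "cart" },
  { linkable: "order", hasMany: true },
]
```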
---------
Co-authored-by: Carlos R. L. Rodrigues <rodrigolr@gmail.com>
**What**
- use the return received total to compute the pending difference against the return requested total
**Why**
- otherwise the outstanding amount would be incorrect when receiving a return, and the outstanding amount after the return is received would also be wrong
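A minimal sketch of the computation with assumed variable names:
```ts
// Assumed shape: the pending difference is what was requested to be returned
// minus what has already been received.
function getPendingDifference(
  returnRequestedTotal: number,
  returnReceivedTotal: number
): number {
  return returnRequestedTotal - returnReceivedTotal
}

// e.g. requested 100, received 40 -> 60 still pending toward the outstanding amount
```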
**What**
Jest patches the event emitter, which can sometimes lead to flaky behavior and block test execution if the `done` callback is never reached. To prevent that from happening, the fail trap calls the `done` callback after a given time and warns that the test could not be concluded because Jest blocked it.
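A minimal sketch of such a fail trap, assuming a standard Jest-style `done` callback; the helper name and timeout value are illustrative:
```ts
// Wrap a done-style test so it cannot hang forever when Jest's patched
// event emitter prevents done() from ever being reached.
function withFailTrap(
  run: (done: (err?: Error) => void) => void,
  timeoutMs = 5000
) {
  return (done: (err?: Error) => void) => {
    const timer = setTimeout(() => {
      console.warn("Test could not be concluded: done() was never reached")
      done()
    }, timeoutMs)

    run((err?: Error) => {
      clearTimeout(timer)
      done(err)
    })
  }
}

// Usage (illustrative):
// it("emits the event", withFailTrap((done) => {
//   emitter.once("completed", () => done())
//   workflow.run()
// }))
```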
**What**
Currently, only cron patterns are supported by scheduled jobs, which can lead to issues. For example, if you set the pattern to execute every hour at minute 0 and second 0 (it is expected to execute at exactly this constraint), but by the time the job actually runs we are past second 0, the job won't get executed until the next scheduled cron execution.
With this PR we introduce the `interval` configuration, which allows you to specify a delay between executions in ms (e.g. every minute -> 60 * 1000 ms) and ensures that once a job is executed, another one is scheduled for a minute later.
**Usage**
```ts
// jobs/job-1.ts
const thirtySeconds = 30 * 1000
export const config = {
  name: "job-1",
  schedule: {
    interval: thirtySeconds,
  },
}
```
* fix: preserve payment sessions during certain Stripe errors for webhook reconciliation
fix: add retry mechanism for errors that might be fixed after retry
fix: authorizePaymentSession method will update payment_session.status regardless of whether or not the authorization is successful
* Refactor: improve handling structure and syntax
- Move HandledErrorType definition to the top of stripe-base
- Use timers/promises for setTimeout
- Removed data in HandledErrorType when retry is true
* refactor: improve error handling flow and logic
- Simplify return statement in initiatePayment to handle null cases
- Remove redundant if-check in handleStripeError and rely on switch
- Reorder conditional checks in executeWithRetry for clearer flow
- Update executeWithRetry to check for retry=false condition first
* clean up
* fix: improve payment error handling and traceability
- Return structured error state for StripeAPIError instead of null
- Throw error when retries are exhausted and no payment intent exists
- Update type definitions to support error state tracking
* fix formatting and naming
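A hedged sketch of the retry flow described above, using `timers/promises` as mentioned; the error shape, attempt count, and delay are assumptions, not the provider's exact implementation:
```ts
import { setTimeout } from "timers/promises"

// Assumed decision shape: either retry the call, or return a structured
// error state so webhook reconciliation can pick the session up later.
type HandledErrorType =
  | { retry: true }
  | { retry: false; error: { code: string; detail: string } }

async function executeWithRetry<T>(
  operation: () => Promise<T>,
  handleStripeError: (err: unknown) => HandledErrorType,
  maxRetries = 2,
  delayMs = 500
): Promise<T | { error: { code: string; detail: string } }> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation()
    } catch (err) {
      const handled = handleStripeError(err)

      // Check the non-retryable case first for clearer flow.
      if (!handled.retry) {
        return { error: handled.error }
      }

      if (attempt >= maxRetries) {
        // Retries exhausted: surface the error instead of swallowing it.
        throw err
      }

      await setTimeout(delayMs)
    }
  }
}
```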
What:
* Old deployments have repeatable jobs registered in the wrong queue. When the `server` instance picks up such a job, the workflow doesn't exist, so it calls to remove the job, which then removes the job from the new queue as well.
* This PR cleans up any repeatable job from the queue that is exclusively meant to handle workflows.
**What**
Currently, the Redis workflow engine does not make any distinction between worker modes: when starting as a server, the engine listens to the queue that contains everything and tries to execute the corresponding workflow, which does not exist since job workflows are not loaded in server mode. Now, a dedicated queue is created for jobs, and the worker is only started if the instance is not in server mode. To clean up the old queue, if the old queue triggers a scheduled job, that job gets removed from the queue, since it will get re-added to the new queue by the new worker instances.
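A rough sketch of the split described above, assuming BullMQ-style queues and hypothetical queue names and worker-mode flag; the actual engine wiring differs:
```ts
import { Queue, Worker } from "bullmq"

const connection = { host: "localhost", port: 6379 }

// Dedicated queue for scheduled-job workflows, separate from the main
// workflow execution queue.
export const jobsQueue = new Queue("workflow-engine-jobs", { connection })

// Assumed worker-mode flag; the real engine reads it from its own config.
const isServerMode = process.env.MEDUSA_WORKER_MODE === "server"

// Only start the jobs worker when the instance is NOT in server mode,
// since job workflows are not loaded on server instances.
if (!isServerMode) {
  new Worker(
    "workflow-engine-jobs",
    async (_job) => {
      // execute the scheduled workflow for this job...
    },
    { connection }
  )
}

// Old-queue cleanup idea: when a scheduled job surfaces on the legacy queue,
// drop its repeatable entry there; the new worker instances will re-register
// it on the dedicated jobs queue.
export async function cleanupLegacyRepeatable(legacyQueue: Queue, jobName: string) {
  const repeatables = await legacyQueue.getRepeatableJobs()
  for (const repeatable of repeatables) {
    if (repeatable.name === jobName) {
      await legacyQueue.removeRepeatableByKey(repeatable.key)
    }
  }
}
```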