* test(): test dynamic max workers
* Clarify test description and improve CI
* chore(): Upgrade mikro orm
* handle 'null' value for big number props
* 6.5.2
* remove only
* fix pricing module rule value
* switch select in strategy for balances
* revert to select in strategy for order module
* fix defining DML ManyToOne
* fix define relationship
* test fix
* more fixes
* change order strategy to balanced
* prevent unnecessary manager fork
* revert generated www changes
* remove unnecessary changes
* Create real-cobras-deny.md
* address feedback
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
**What**
- Fix missing `ON DELETE CASCADE` constraint on order credit lines
- Fix incorrect usage of `receiveReturn`
- Make all order integration tests run and rename them to `*.spec.ts`
- Fix package.json typo
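For reference, a minimal sketch of how such a constraint could be added in a MikroORM migration; the table, column, and constraint names below are illustrative assumptions, not the actual ones used in the fix.

```ts
import { Migration } from "@mikro-orm/migrations"

// Hypothetical migration: "order_credit_line"/"order_id" are placeholder names.
export class AddOrderCreditLineCascade extends Migration {
  async up(): Promise<void> {
    this.addSql(
      `ALTER TABLE "order_credit_line" DROP CONSTRAINT IF EXISTS "order_credit_line_order_id_foreign";`
    )
    this.addSql(
      `ALTER TABLE "order_credit_line"
         ADD CONSTRAINT "order_credit_line_order_id_foreign"
         FOREIGN KEY ("order_id") REFERENCES "order" ("id") ON DELETE CASCADE;`
    )
  }

  async down(): Promise<void> {
    this.addSql(
      `ALTER TABLE "order_credit_line" DROP CONSTRAINT IF EXISTS "order_credit_line_order_id_foreign";`
    )
  }
}
```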
**What**
- Adjust the lock duration; it is in seconds, not milliseconds
- Log on lock release
- Renew lock separately from the other promises
- Add more logs
- Log when the lock can't be acquired. This is expected when two processes try to sync the same entity, in which case it can be ignored, but it at least gives some information if it happens for another reason
- Log when releasing the lock fails. The lock will remain in the locking provider for one minute before being removed, but it won't prevent other entities from being synced
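A rough sketch of the intended behaviour, assuming a generic lock provider; the `acquire`/`release` signatures and key names below are illustrative, not the actual locking module API.

```ts
// Illustrative lock provider interface; not the actual Medusa locking module API.
interface LockProvider {
  acquire(key: string, opts: { expire: number }): Promise<boolean> // expire is in SECONDS
  release(key: string): Promise<void>
}

const LOCK_SECONDS = 60 // seconds, not milliseconds

async function syncWithLock(
  locks: LockProvider,
  entity: string,
  sync: () => Promise<void>
): Promise<void> {
  const key = `index-sync:${entity}`

  if (!(await locks.acquire(key, { expire: LOCK_SECONDS }))) {
    // Expected when another process is already syncing this entity; safe to ignore.
    console.warn(`Could not acquire lock for ${entity}, skipping`)
    return
  }

  // Renew the lock on its own interval, separately from the sync promises.
  const renewal = setInterval(() => {
    void locks.acquire(key, { expire: LOCK_SECONDS }).catch(() => {})
  }, (LOCK_SECONDS / 2) * 1000)

  try {
    await sync()
  } finally {
    clearInterval(renewal)
    try {
      await locks.release(key)
      console.info(`Released lock for ${entity}`)
    } catch (e) {
      // The lock expires on its own after ~1 minute; other entities can still be synced.
      console.warn(`Failed to release lock for ${entity}`, e)
    }
  }
}
```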
**What**
Make sure there are no open handles left and that the shutdown functions are properly called. Refactor and improve the Medusa test runner. Make sure all module instances are released and cleaned up.
**NOTE:**
In a separate PR we can continue investigating the memory growth over time while the tests execute.
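A minimal sketch of the shutdown guarantee using plain Jest hooks; `startApp`/`shutdown` are hypothetical stand-ins for what the test runner actually wires up.

```ts
type RunningApp = { shutdown: () => Promise<void> }

// Hypothetical bootstrap; stands in for what the medusa test runner creates per suite.
async function startApp(): Promise<RunningApp> {
  return {
    shutdown: async () => {
      // close DB connections, stop workers, release module instances
    },
  }
}

describe("integration suite", () => {
  let app: RunningApp

  beforeAll(async () => {
    app = await startApp()
  })

  afterAll(async () => {
    // Always called, so Jest exits without reporting open handles.
    await app?.shutdown()
  })

  it("runs against a fully started app", () => {
    expect(app).toBeDefined()
  })
})
```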
* feat(index): add filterable fields to link definition
* rm comment
* break recursion
* validate read only links
* validate filterable
* gql schema array
* link parents
* isInverse
* push id when not present
* Fix circular relationships and add tests to ensure proper behaviour (part 1)
* log and fallback to entity.alias
* cleanup and fixes
* fix get attributes
* gql type
* unit test
* array inference
* rm only
* package.json
* fix link retrieval on duplicated entity type and aliases + tests
* link parents as array
* Match only parent entity
* rm comment
* remove hard coded schema
* extend types
* unit test
* test
* types
* pagination type
* type
* fix integration tests
* Improve performance of in selection
* use @@ to filter property
* escape jsonPath
* add Event Bus by default
* changeset
* rm postgres analyze
* estimate count
* new query
* parent aliases
* inner query w/ filter and sort relations
* address comments
---------
Co-authored-by: adrien2p <adrien.deperetti@gmail.com>
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
**What**
First iteration to prevent events from overwhelming the system.
- Group emitted event IDs when possible instead of creating a message per ID, which massively reduces the number of events to process in cases such as imports
- Update the index engine to process event data in batches of 100
- Update event handling by the index engine to be able to upsert by batch as well
- Fix index engine build config for intermediate listeners inference
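A simplified sketch of that batching, with illustrative helper names: event IDs are grouped into a single message, and the index engine processes them in chunks of 100.

```ts
// Group IDs into a single event payload instead of emitting one message per ID.
type GroupedEvent = { name: string; data: { ids: string[] } }

function groupEvent(name: string, ids: string[]): GroupedEvent {
  return { name, data: { ids } }
}

// Consume the grouped IDs in batches of 100 so a large import can't overwhelm the engine.
async function processInBatches(
  ids: string[],
  upsertBatch: (batch: string[]) => Promise<void>,
  batchSize = 100
): Promise<void> {
  for (let i = 0; i < ids.length; i += batchSize) {
    await upsertBatch(ids.slice(i, i + batchSize))
  }
}
```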
* fix(): Allow to query deleted records from graph API
* handle empty q value
* update staled at sync
* rename integration tests file
* Create strong-houses-marry.md
* try to fix flaky tests
* fix pricing context
* update changeset
* update changeset
* fix import
* skip test for now
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
**What**
- Add index engine feature flag
- Apply it to the `store/products` endpoint as well as `admin/products`
- Query builder various fixes
- Search capabilities across the full data of every entity. The `q` search is applied to all joined tables involved in the selection/where clauses
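As an illustration, a store request combining `q` with pagination might look like the sketch below; the base URL and publishable key are placeholders, and it assumes the feature flag is enabled.

```ts
const BACKEND_URL = "http://localhost:9000" // assumed local dev server
const PUBLISHABLE_KEY = "pk_..." // placeholder publishable API key

// With the index engine enabled, `q` is matched against the joined tables
// involved in the requested fields and filters.
const res = await fetch(`${BACKEND_URL}/store/products?q=shirt&limit=10`, {
  headers: { "x-publishable-api-key": PUBLISHABLE_KEY },
})
const { products, count } = await res.json()
```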
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
What:
- `query.index` helper. It queries the index module and aggregates the rest of the requested fields/relations if needed, like `query.graph` does.
Not covered in this PR:
- Hydrate only the sub-entities returned by the query. Example: if 1 out of 5 variants is returned, it should only hydrate the data of that single entity; currently it will merge all the variants of the product.
- Generate types of indexed data
example:
```ts
const query = container.resolve(ContainerRegistrationKeys.QUERY)
await query.index({
entity: "product",
fields: [
"id",
"description",
"status",
"variants.sku",
"variants.barcode",
"variants.material",
"variants.options.value",
"variants.prices.amount",
"variants.prices.currency_code",
"variants.inventory_items.inventory.sku",
"variants.inventory_items.inventory.description",
],
filters: {
"variants.sku": { $like: "%-1" },
"variants.prices.amount": { $gt: 30 },
},
pagination: {
order: {
"variants.prices.amount": "DESC",
},
},
})
```
This query returns all products where at least one variant has an SKU ending in `-1` and at least one price greater than `30`.
The Index Module only holds the data used to paginate and filter, so the returned object is:
```json
{
"id": "prod_01JKEAM2GJZ14K64R0DHK0JE72",
"title": null,
"variants": [
{
"id": "variant_01JKEAM2HC89GWS95F6GF9C6YA",
"sku": "extra-variant-1",
"prices": [
{
"id": "price_01JKEAM2JADEWWX72F8QDP6QXT",
"amount": 80,
"currency_code": "USD"
}
]
}
]
}
```
All the rest of the fields will be hydrated from their respective modules, and the final result will be:
```json
{
"id": "prod_01JKEAY2RJTF8TW9A23KTGY1GD",
"description": "extra description",
"status": "draft",
"variants": [
{
"sku": "extra-variant-1",
"barcode": null,
"material": null,
"id": "variant_01JKEAY2S945CRZ6X4QZJ7GVBJ",
"options": [
{
"value": "Red"
}
],
"prices": [
{
"amount": 20,
"currency_code": "CAD",
"id": "price_01JKEAY2T2EEYSWZHPGG11B7W7"
},
{
"amount": 80,
"currency_code": "USD",
"id": "price_01JKEAY2T2NJK2E5468RK84CAR"
}
],
"inventory_items": [
{
"variant_id": "variant_01JKEAY2S945CRZ6X4QZJ7GVBJ",
"inventory_item_id": "iitem_01JKEAY2SNY2AWEHPZN0DDXVW6",
"inventory": {
"sku": "extra-variant-1",
"description": "extra variant 1",
"id": "iitem_01JKEAY2SNY2AWEHPZN0DDXVW6"
}
}
]
}
]
}
```
Co-authored-by: Adrien de Peretti <25098370+adrien2p@users.noreply.github.com>
Closes: FRMW-2892, FRMW-2893
**What**
Wired up the building blocks merged previously in order to manage data synchronization. The flow is as follows:
- On application start
  - Build the schema object representation from the configuration
  - Check for configuration changes
  - If new entities are configured
    - The data synchronizer initializes the orchestrator and starts the sync
    - For each entity
      - Acquire a lock
      - Mark the existing data as staled
      - Sync all data by batch
        - Mark the synced rows as no longer staled
        - Acknowledge each processed batch and renew the lock
        - Update the metadata with the last synced cursor for entity X
      - Release the lock
      - Remove all remaining staled data
  - If any entities were removed since the last configuration
    - Remove their index data and relations
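A condensed sketch of the per-entity flow above; the store and lock helpers are illustrative placeholders rather than the actual module interfaces.

```ts
// Illustrative placeholders for the pieces involved in the per-entity sync flow.
interface IndexStore {
  markAllStaled(entity: string): Promise<void>
  upsertBatch(entity: string, rows: object[]): Promise<void>
  removeStaled(entity: string): Promise<void>
  saveCursor(entity: string, cursor: string | null): Promise<void>
}

type Batch = { rows: object[]; cursor: string | null }

async function syncEntity(
  entity: string,
  fetchBatch: (cursor: string | null) => Promise<Batch>,
  store: IndexStore,
  renewLock: () => Promise<void>
): Promise<void> {
  // Assumes the caller has already acquired the lock for this entity.
  await store.markAllStaled(entity)

  let cursor: string | null = null
  do {
    const batch = await fetchBatch(cursor)
    if (!batch.rows.length) {
      break
    }

    // Upserting a row clears its staled marker.
    await store.upsertBatch(entity, batch.rows)
    await renewLock()
    await store.saveCursor(entity, batch.cursor)
    cursor = batch.cursor
  } while (cursor)

  // Rows still marked as staled no longer exist upstream and can be removed.
  await store.removeStaled(entity)
}
```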
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>