**What**
The context reference is being mutated by the repository, leading to an empty context. Also, the filter is built from the pricing context wrapper instead of from `pricingContext.context`, so all preferences are always fetched every time.
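A minimal sketch of the shape of the fix, assuming hypothetical names (`PricingContext` and `buildPreferenceFilter` are illustrative, not the module's actual API):

```ts
// Illustrative sketch only: names and shapes are assumptions, not the real API.
type PricingContext = { context: Record<string, string | number> }

function buildPreferenceFilter(
  pricingContext: PricingContext
): { attribute: string[] } {
  // Clone the inner context so the repository can mutate its own copy
  // without emptying the caller's reference.
  const context = { ...pricingContext.context }

  // Build the filter from pricingContext.context, not from the wrapper:
  // filtering on the wrapper matched nothing, so every preference was fetched.
  return { attribute: Object.keys(context) }
}
```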
* test(): test dynamic max workers
* Clarify test description and improve CI
* chore(): Upgrade MikroORM
* handle 'null' value for big number props
* 6.5.2
* remove only
* fix pricing module rule value
* switch select in strategy for balances
* revert to select in strategy for order module
* fix defining DML ManyToOne
* fix define relationship
* test fix
* more fixes
* change order strategy to balanced
* prevent unnecessary manager fork
* revert generated www changes
* remove unnecessary changes
* Create real-cobras-deny.md
* address feedback
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
* chore(pricing): Fix excessive db queries during price sets update
* finalize upsert with replace of the rules
* fix limit
* Create quiet-pumpkins-hang.md
---------
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
**What**
- Fix missing `ON DELETE CASCADE` constraint on order credit lines (see the sketch after this list)
- Fix `receiveReturn` misuse
- Make all order integration tests run and rename them to `*.spec.ts`
- Fix package.json typo
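A minimal sketch of the cascade fix, assuming a MikroORM v6 entity; the entity and property names here are illustrative, not the actual order module models:

```ts
import { Entity, ManyToOne, PrimaryKey } from "@mikro-orm/core"

@Entity()
class Order {
  @PrimaryKey()
  id!: string
}

@Entity()
class OrderCreditLine {
  @PrimaryKey()
  id!: string

  // deleteRule: "cascade" emits ON DELETE CASCADE on the foreign key,
  // so credit lines are removed together with their parent order.
  @ManyToOne(() => Order, { deleteRule: "cascade" })
  order!: Order
}
```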
* feat(index): add filterable fields to link definition
* rm comment
* break recursion
* validate read only links
* validate filterable
* gql schema array
* link parents
* isInverse
* push id when not present
* Fix circular relationships and add tests to ensure proper behaviour (part 1)
* log and fallback to entity.alias
* cleanup and fixes
* fix get attributes
* gql type
* unit test
* array inference
* rm only
* package.json
* fix link retrieval on duplicated entity type and aliases + tests
* link parents as array
* Match only parent entity
* rm comment
* remove hard coded schema
* extend types
* unit test
* test
* types
* pagination type
* type
* fix integration tests
* Improve performance of in selection
* use @@ to filter property
* escape jsonPath
* add Event Bus by default
* changeset
* rm postgres analyze
* estimate count
* new query
* parent aliases
* inner query w/ filter and sort relations
* address comments
---------
Co-authored-by: adrien2p <adrien.deperetti@gmail.com>
Co-authored-by: Oli Juhl <59018053+olivermrbl@users.noreply.github.com>
**What**
We found out that the pricing context built from the cart always contains the entire cart, which is incorrect. Even with the query performance improvements, carrying hundreds of constraints for nothing is potentially very costly. For that reason, we cache the rule attributes in memory, gathering them with the best query we can, and we renew them during price calculation whenever the cache has been reset. That way we avoid unnecessary checks on context attributes that have no rules.
We no longer have the rule type table, which used to do this for us and was very useful for finding these attributes; until we have a proper caching layer, this approach should do, IMO.
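A minimal sketch of the described caching, with hypothetical names (`distinctRuleAttributes` and `pruneContext` are illustrative):

```ts
// Illustrative sketch: cache the set of attributes that actually have rules,
// renew it lazily, and prune the pricing context against it.
let ruleAttributesCache: Set<string> | null = null

async function getRuleAttributes(repo: {
  distinctRuleAttributes(): Promise<string[]>
}): Promise<Set<string>> {
  // Only hit the database when the cache has been reset.
  if (!ruleAttributesCache) {
    ruleAttributesCache = new Set(await repo.distinctRuleAttributes())
  }
  return ruleAttributesCache
}

function invalidateRuleAttributes(): void {
  // Called when rules change so the next price calculation refreshes the set.
  ruleAttributesCache = null
}

function pruneContext(
  context: Record<string, unknown>,
  attributes: Set<string>
): Record<string, unknown> {
  // Drop context keys that no rule references, so hundreds of cart fields
  // don't turn into hundreds of useless query constraints.
  return Object.fromEntries(
    Object.entries(context).filter(([key]) => attributes.has(key))
  )
}
```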
**What**
I have removed the check for the context key that fetched all available attributes and then stripped out the ones that do not exist. On big datasets this saves several hundred milliseconds of query execution.
**What**
First iteration to prevent events from overwhelming the system.
- Group emitted event ids into a single message where possible, instead of creating one message per id; this massively reduces the number of events to process, e.g. during an import (see the sketch after this list)
- Update the index engine to process event data in batches of 100
- Update event handling by the index engine so it can upsert by batch as well
- Fix the index engine build config for intermediate listener inference
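A minimal sketch of the batching described above, under assumed shapes (`handleEvent`, `upsertBatch`, and the event payload are illustrative, not the index module's actual API):

```ts
// Illustrative sketch: one event now carries many ids; process them in
// chunks of 100 instead of handling one message per id.
const BATCH_SIZE = 100

function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}

async function handleEvent(
  event: { name: string; data: { id: string }[] },
  upsertBatch: (ids: string[]) => Promise<void>
): Promise<void> {
  const ids = event.data.map((d) => d.id)
  // Upsert batch by batch so a large import cannot overwhelm the engine.
  for (const batch of chunk(ids, BATCH_SIZE)) {
    await upsertBatch(batch)
  }
}
```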