**What**
I have removed the check for the context key that fetched all available attributes and then stripped out the ones that do not exist. On big datasets this shaves multiple hundreds of milliseconds off query execution.
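A minimal before/after sketch of the idea; `fetchAllAttributes` and both function names are purely illustrative, not the actual Medusa internals:
```ts
// Illustrative only -- `fetchAllAttributes` is a hypothetical helper.
declare function fetchAllAttributes(): Promise<string[]>

// Before: fetch every available attribute just to drop the requested keys that don't exist.
async function resolveContextKeysBefore(requested: string[]): Promise<string[]> {
  const allAttributes = await fetchAllAttributes() // expensive on big datasets
  return requested.filter((key) => allAttributes.includes(key))
}

// After: skip the existence check and pass the requested keys straight through,
// saving the extra fetch on every execution.
function resolveContextKeysAfter(requested: string[]): string[] {
  return requested
}
```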
* fix(workflow-engine-*): Prevent passing shared context reference
* prevent tests from hanging
* fix event handling
* add integration tests
* use interval for scheduled in tests
* skip tests for now
* Create silent-glasses-enjoy.md
* fix cancel
* changeset
* push multiple aliases
* test multiple field alias
* increase wait time to index on test
---------
Co-authored-by: Carlos R. L. Rodrigues <37986729+carlos-r-l-rodrigues@users.noreply.github.com>
Co-authored-by: Carlos R. L. Rodrigues <rodrigolr@gmail.com>
**What**
- Use resource id filtering when possible instead of relying on programmatic intersection checks over potentially hundreds of thousands of resources from the link. It is not necessary to fetch everything and check in memory when the check can be done in the database (see the sketch below).
- Also fix the `normalizeDataForContext` middlewares
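A hedged before/after sketch of the idea; `linkService`, its `list` filter shape, and the function names are assumptions for illustration:
```ts
// Illustrative only -- `linkService` and its filter shape are assumptions.
declare const linkService: {
  list(filter: { resource_id?: string[] }): Promise<{ resource_id: string }[]>
}

// Before: load every linked resource, then intersect with the wanted ids in memory.
async function findMatchingBefore(resourceIds: string[]) {
  const linked = await linkService.list({}) // potentially hundreds of thousands of rows
  const wanted = new Set(resourceIds)
  return linked.filter((link) => wanted.has(link.resource_id))
}

// After: push the id filter into the query so the database does the intersection.
async function findMatchingAfter(resourceIds: string[]) {
  return await linkService.list({ resource_id: resourceIds })
}
```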
**What**
First iteration to prevent events from overwhelming the system.
- Group emitted event ids when possible instead of creating a message per id, which massively reduces the number of events to process in cases such as imports (see the sketch after this list)
- Update the index engine to process event data in batches of 100
- Update event handling by the index engine to be able to upsert by batch as well
- Fix the index engine build config for intermediate listener inference
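A minimal sketch of the grouping step; the message shape is an assumption, not the actual event bus types:
```ts
// Illustrative only -- the message shape is an assumption.
type BatchedEventMessage = { name: string; data: { ids: string[] } }

const BATCH_SIZE = 100

// Emit one message per batch of ids instead of one message per id.
function groupEventIds(name: string, ids: string[]): BatchedEventMessage[] {
  const batches: BatchedEventMessage[] = []
  for (let i = 0; i < ids.length; i += BATCH_SIZE) {
    batches.push({ name, data: { ids: ids.slice(i, i + BATCH_SIZE) } })
  }
  return batches
}
```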
* feat: add createMultiple flag to enforce inApp link uniqueness
* changes
* mocks
* default
* many to many
---------
Co-authored-by: Carlos R. L. Rodrigues <rodrigolr@gmail.com>
**What**
Currently only cron patterns are supported by scheduled jobs, which can lead to issues. For example, you set the pattern to execute every hour at minute 0 and second 0 (as the job is expected to execute exactly on that constraint), but by the moment it actually runs we are past second 0, so the job won't be executed until the next scheduled cron table execution.
With this PR we introduce the `interval` configuration, which lets you specify a delay between executions in ms (e.g. every minute -> 60 * 1000 ms) and ensures that once a job has executed, the next one is scheduled one interval later.
**Usage**
```ts
// jobs/job-1.ts
const thirtySeconds = 30 * 1000

export const config = {
  name: "job-1",
  schedule: {
    interval: thirtySeconds,
  },
}
```
Fixes: FRMW-2930
This PR updates the MikroORM config to use the module name as the snapshot name when generating migration files. Otherwise, MikroORM defaults to the database name, and every time you update the database name, it creates a new snapshot.
We also migrate existing snapshot files to match the new file name to avoid breaking changes.
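A minimal sketch of the idea, assuming MikroORM's `migrations.snapshotName` option and a hypothetical `product` module:
```ts
import { defineConfig } from "@mikro-orm/postgresql"

export default defineConfig({
  dbName: "medusa", // by default the snapshot file name follows this value
  migrations: {
    // Key the snapshot by the module name (here an assumed `product` module)
    // so renaming the database no longer produces a fresh snapshot file.
    snapshotName: ".snapshot-product",
  },
})
```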
**What**
- Reworks how admin extensions are loaded from plugins.
- Reworks how extensions are managed internally in the dashboard project.
**Why**
- Previously we loaded extensions from plugins the same way we do for extensions found in a user's application: scanning the source code in `.medusa/server/src/admin` for possible extensions, and including any extensions that were discovered in the final virtual modules.
- This was causing issues with how Vite optimizes dependencies, and would lead to CJS/ESM issues. We are not sure of the exact cause, but the issue was pinpointed to Vite not being able to correctly register which dependencies to optimize when they were loaded through a virtual module from a plugin in `node_modules`.
**What changed**
- To circumvent the above issue we have changed to a different strategy for loading extensions from plugins. The changes are the following (see the sketch after this list):
  - We now build plugins slightly differently: if a plugin has admin extensions, we build those to `.medusa/server/src/admin/index.mjs` and `.medusa/server/src/admin/index.js` for the ESM and CJS builds respectively.
  - When determining how to load extensions from a source we follow these rules:
    - If the source has a `medusa-plugin-options.json` or is the root application, we determine that it is a `local` extension source and load its extensions as before through a virtual module.
    - If it has neither of the above, but has a `./admin` export in its `package.json`, we determine that it is a `package` extension source, and we update the dashboard entry point to import the package and pass its extensions along to the dashboard manager.
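A hedged sketch of the resolution rules above; the helper and its input shape are hypothetical, not the dashboard's actual API:
```ts
// Illustrative only -- the helper and its input shape are assumptions.
type ExtensionSourceKind = "local" | "package"

function resolveExtensionSource(source: {
  isRootApplication: boolean
  hasPluginOptionsFile: boolean // medusa-plugin-options.json is present
  packageExports?: Record<string, unknown>
}): ExtensionSourceKind | null {
  if (source.isRootApplication || source.hasPluginOptionsFile) {
    // Loaded as before: scanned and bundled through a virtual module.
    return "local"
  }
  if (source.packageExports && "./admin" in source.packageExports) {
    // Imported directly through the package's ./admin export.
    return "package"
  }
  return null // no admin extensions to load from this source
}
```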
**Changes required by plugin authors**
- The change introduces no breaking changes, but plugin authors need to update the `package.json` of their plugins to also include a `./admin` export. It should look like this:
```json
{
  "name": "@medusajs/plugin",
  "version": "0.0.1",
  "description": "A starter for Medusa plugins.",
  "author": "Medusa (https://medusajs.com)",
  "license": "MIT",
  "files": [
    ".medusa/server"
  ],
  "exports": {
    "./package.json": "./package.json",
    "./workflows": "./.medusa/server/src/workflows/index.js",
    "./.medusa/server/src/modules/*": "./.medusa/server/src/modules/*/index.js",
    "./modules/*": "./.medusa/server/src/modules/*/index.js",
    "./providers/*": "./.medusa/server/src/providers/*/index.js",
    "./*": "./.medusa/server/src/*.js",
    "./admin": {
      "import": "./.medusa/server/src/admin/index.mjs",
      "require": "./.medusa/server/src/admin/index.js",
      "default": "./.medusa/server/src/admin/index.js"
    }
  }
}
```