Description
Are you willing to provide a PR for this issue or aid in developing it?
Yes.
What problem does this feature proposal attempt to solve?
Query validation takes a lot of time and resources (about 30–40% of processing time). However, a typical website uses a limited number of queries, which are compiled on the client. So we could cache the result of query validation, and identical queries would be processed much faster.
Which possible solutions should be considered?
There are only two hard things in Computer Science: cache invalidation and naming things. --Phil Karlton
In order to cache something, we have to figure out when to invalidate that cache. The validation is performed inside webonyx/graphql-php in GraphQL::promiseToExecute().
Here is what it depends on:
- Schema
- Query
- Validation rules
The first two are represented as plain ASTs. The schema is already cached, and I opened #2017 to cache queries, so we can easily tell when to invalidate them.
The last one is more difficult. Rules are PHP classes whose dependencies cannot be easily tracked: they may depend on arbitrary other data or code. For example, the QueryComplexity rule depends on all the query variables that are passed in.
I guess we could add a getHash() method to the ProvidesValidationRules interface. It would return a hash of all the data the rules depend on, so if anything changes, we can detect it and invalidate the cache. For the default Lighthouse ValidationRulesProvider, it would hash the corresponding config values, plus the query variables if the QueryComplexity rule is enabled.
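A minimal sketch of what that could look like. The method name `getHash()`, the `CacheableValidationRulesProvider` class, and the exact config keys are assumptions for illustration, not the actual Lighthouse API:

```php
<?php

use GraphQL\Validator\Rules\ValidationRule;

// Hypothetical extension of the ProvidesValidationRules contract.
interface ProvidesValidationRules
{
    /** @return array<string, ValidationRule>|null */
    public function validationRules(): ?array;

    /**
     * Hash of everything the active rules depend on.
     * If this value changes, cached validation results must be discarded.
     */
    public function getHash(): string;
}

// Sketch of how the default provider might implement it.
final class CacheableValidationRulesProvider implements ProvidesValidationRules
{
    /** @param array<string, mixed> $variables Query variables of the current request. */
    public function __construct(private array $variables) {}

    public function validationRules(): ?array
    {
        return null; // use graphql-php defaults in this sketch
    }

    public function getHash(): string
    {
        // Config keys shown are illustrative.
        $dependencies = [
            'max_query_complexity' => config('lighthouse.security.max_query_complexity'),
            'max_query_depth' => config('lighthouse.security.max_query_depth'),
        ];

        // QueryComplexity inspects variables, so they must be part of the hash
        // whenever that rule is enabled.
        if ($dependencies['max_query_complexity'] > 0) {
            $dependencies['variables'] = $this->variables;
        }

        return hash('sha256', serialize($dependencies));
    }
}
```

Hashing the variables does mean that requests with different variables get different cache keys even when the query text is identical, which is the price of keeping QueryComplexity correct.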
Given these three hashes, we can store the result of validation and be sure nothing it depends on has changed.
I guess we should also extract the validation step from webonyx/graphql-php's GraphQL::promiseToExecute() so that it can be executed independently.
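With validation extracted, a caching wrapper could look roughly like this. `DocumentValidator::validate()` is the real graphql-php entry point; the cache key scheme, the `$schemaHash`/`$queryHash` inputs, and the PSR-16 cache are assumptions for illustration:

```php
<?php

use GraphQL\Language\AST\DocumentNode;
use GraphQL\Type\Schema;
use GraphQL\Validator\DocumentValidator;
use Psr\SimpleCache\CacheInterface;

/**
 * Validate a query once per (schema, query, rules) triple.
 *
 * @param string $schemaHash Hash of the cached schema (already available).
 * @param string $queryHash  Hash of the parsed query (see #2017).
 * @return array<int, \GraphQL\Error\Error> Validation errors, empty if valid.
 */
function validateCached(
    Schema $schema,
    DocumentNode $query,
    ProvidesValidationRules $rulesProvider,
    CacheInterface $cache,
    string $schemaHash,
    string $queryHash,
): array {
    $key = hash('sha256', "{$schemaHash}:{$queryHash}:{$rulesProvider->getHash()}");

    // Only successful validations are cached, so we never have to
    // serialize Error objects and a hit simply means "skip validation".
    if ($cache->get($key) === true) {
        return [];
    }

    $errors = DocumentValidator::validate($schema, $query, $rulesProvider->validationRules());

    if ($errors === []) {
        $cache->set($key, true);
    }

    return $errors;
}
```

Caching only the success case keeps the stored value trivial; invalid queries are rare in production and simply get re-validated on each attempt.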
Do you have any objections or suggestions? This may improve Lighthouse performance significantly.