cache.modify refetches whole query when I try to update cache #7105
From the docs: try adding `broadcast: false`:

```js
const [moveItem] = useMutation(MOVE_ITEM, {
  update: (cache, { data: { moveItem } }) => {
    cache.modify({
      id: cache.identify(moveItem.subcategory),
      broadcast: false,
      fields: {
        subcategoryItemConnection() {
          // ...cache update logic
        },
      },
    });
  },
});
```
I tried adding it and it still refetches the whole query.
@mpaus Have you seen the `relayStylePagination` helper? You might not need to write the pagination logic yourself:

```js
import { relayStylePagination } from "@apollo/client/utilities";

new InMemoryCache({
  typePolicies: {
    // ...
    Category: {
      fields: {
        subcategoryConnection: relayStylePagination(),
      },
    },
    Subcategory: {
      fields: {
        subcategoryItemConnection: relayStylePagination(),
      },
    },
    // ...
  },
});
```

You'll need a field policy for each paginated field (like the two shown above).
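For intuition, a relay-style pagination field policy boils down to a `merge` function that appends incoming edges to the existing ones instead of replacing them. Below is a simplified sketch of that idea only; it is not the actual `relayStylePagination` implementation from `@apollo/client`, which additionally handles cursors, arguments, and edge deduplication:

```javascript
// Simplified sketch of a relay-style "merge" field policy.
// NOT the real relayStylePagination() from @apollo/client; this only
// shows the core append-on-merge behavior.
function mergeConnections(existing, incoming) {
  // First page: nothing cached yet, keep the incoming connection as-is.
  if (!existing) return incoming;
  // Later pages: keep incoming metadata (e.g. pageInfo) but
  // concatenate the edge lists.
  return {
    ...incoming,
    edges: [...existing.edges, ...incoming.edges],
  };
}
```

A real field policy would pass a function like this as the `merge` option of the field's type policy.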
Hello, I'm having a slightly different issue. I'm using […] EDIT: I fixed it by switching to […]
@benjamn I tried it, but it didn't seem to do anything for me, so I wrote my own helper function for updating the cache after the mutation, and I figured out that the case I'm trying to achieve doesn't seem to be doable (I may be wrong, though). However, I did find a solution: in the mutation I return the parent ID and refetch the fields I want to update. Here is the code snippet: […]

I guess because I am updating a single existing parent entity and refetching the changed fields, this works automatically. I still think a more optimal solution would be to fetch only the added subcategoryItem and append it to the cached connection array instead of refetching the whole array. Still, this is a good workaround because it automatically refetches only the fields I specify.
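The append-instead-of-refetch approach described above can be sketched as a pure modifier function. This is an illustrative sketch, not code from the thread: in a real `update` callback this logic would live inside a `cache.modify` field function, with `toReference(newItem)` supplying `newItemRef`:

```javascript
// Sketch: append one new item reference to a cached connection
// instead of refetching the whole list. The connection/edge shape
// here is an assumption for illustration.
function appendToConnection(existing, newItemRef) {
  // Guard against duplicates if the modifier runs more than once
  // (e.g. once for the optimistic result, once for the server result).
  const alreadyPresent = existing.edges.some(
    (edge) => edge.node.__ref === newItemRef.__ref
  );
  if (alreadyPresent) return existing;
  return {
    ...existing,
    edges: [...existing.edges, { node: newItemRef }],
  };
}
```

The duplicate check matters because Apollo runs `update` for both the optimistic and the final mutation result.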
Okay, I have the same issue now. If I add a new comment to the edge array, the whole query is refetched, but if I uncomment the newly added comment, everything works.
@pontusab were you able to find a solution? I'm building a chat with subscriptions and I would like to update my UI before the server's response. My code is almost the same as yours.
@vendramini No, still the same issue.
@benjamn I'm sorry to ping you, but I've run some tests using the official Apollo issue reproduction together with my code, so I can compare the behavior. I can't figure out what's going on. I've found other posts on the same topic here, on Stack Overflow, and on Spectrum chat. I don't know if we are misunderstanding or missing something about how this works.

Using the official repro, I'm able to use:

```js
addPerson({
  variables: { name },
  optimisticResponse: {
    addPerson: {
      id: 'me',
      name: 'also me',
      is_optimistic: true,
    }
  }
});
```

The optimistic update works in that repro. I'm doing the same logic in my chat app, but getting the data from the server:

```js
const {
  data: dataMessages,
  loading: loadingMessages,
  error,
} = useQuery(messagesQuery, {
  variables: { targetUsername: 'test_username' },
});

const [sendMessage, { loading: loadingSendMessage }] = useMutation(
  sendMessageMutation,
  {
    update(cache, { data }) {
      const message = data.sendMessage;
      cache.modify({
        fields: {
          messages(prev = []) {
            return [...prev, message];
          },
        },
      });
    },
  }
);
```

And the click handler:

```js
sendMessage({
  variables: {
    username: 'test_username',
    input: {
      text: 'lorem ipsum dolor sit'
    }
  },
  optimisticResponse: {
    sendMessage: {
      is_optimistic: true,
      created_at: '2020-10-20T01:29:00.687Z',
      id: "123321",
      media: null,
      text: 'lorem ipsum dolor sit',
      __typename: "Message",
      to: {
        id: "cka8ppaws00cp0966e40ckhlr",
        __typename: "User",
        avatar: {
          id: "ckec1lcyw00z8076623vgmk8b",
          filename: 'cka8ppaws00cp0966e40ckhlr159848650442320200805_211353.jpg',
          __typename: "File"
        }
      },
      from: {
        id: "ckdhtrk3i000n0866hhckskvg",
        __typename: "User",
        avatar: {
          id: "ckdvsz6eh00tf0866b724h5xo",
          filename: 'ckdhtrk3i000n0866hhckskvg1597504574221images.jpeg',
          __typename: "File"
        }
      }
    }
  }
});
```

When I add the […], the whole query is refetched. If I add the new message after the server's response, it works; the […].
I think the problem isn't related to […]. The docs say: "Like writeQuery and writeFragment, modify triggers a refresh of all active queries that depend on modified fields (unless you override this behavior)." But HOW do I override this behavior? I've tried adding […].
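For reference, the override the docs allude to appears to be the `broadcast: false` option on `cache.modify` (it is shown in the first snippet of this thread). Note that commenters above found it did not stop the refetch when the written data was incomplete. The options shape, per my reading of the Apollo Client 3 API, looks like this (the cache ID is illustrative):

```javascript
// Options object for cache.modify. broadcast: false asks the cache not
// to notify active queries about this change; it does not prevent a
// refetch caused by writing incomplete data.
const modifyOptions = {
  id: "Subcategory:1", // illustrative normalized cache ID
  broadcast: false,
  fields: {
    subcategoryItemConnection(existing) {
      return existing; // real logic would update the connection here
    },
  },
};
```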
@vendramini I just solved this by returning the right structure from the backend. I guess the reason the query refreshes is that the optimistic response is not the same as the actual one from the mutation, so it reloads the whole query.
I tried with the same structure, without success. I gave up, and now I'm controlling what I need with a parallel […]. Thank you :)
I can confirm I'm experiencing exactly the same issue. If I update some specific (nested) fields manually on mutation, it doesn't make sense to refetch the whole query. If I wanted to refetch the whole query, I would call […].
@ilyagru, make sure that the mutation response data is exactly what you expect. I had the same issue, but looking closer, the response lacked some fields.
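One way to catch the mismatch described above is to compare the mutation result against the field set the query selects, since a missing field is what forces Apollo to refetch. This is a hypothetical debugging helper, not part of Apollo Client:

```javascript
// Hypothetical helper: report which expected fields are absent from a
// mutation result object. An empty array means the shapes match.
function missingFields(result, expectedFields) {
  return expectedFields.filter((field) => !(field in result));
}
```

Logging `missingFields(data.sendMessage, [...fields your query selects...])` inside the `update` callback makes an incomplete response visible immediately.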
Thanks @pontusab! It seems my issue has been resolved with […].
I am having a similar issue, except it only happens on root queries. For example, this triggers a refetch (`favoriteReportsByUser` is a root query):

```ts
addToFavoritesMutation({
  variables: { id, userId, reportId },
  optimisticResponse: {
    __typename: 'Mutation',
    createUserFavoriteReport: {
      __typename: 'UserFavoriteReport',
      id,
      userId,
      reportId,
      report: {
        __typename: 'Report',
        name: reportName,
      },
    },
  },
  update: (cache, { data }) => {
    const favoriteId = data?.createUserFavoriteReport?.id;
    const refId = cache.identify({ __typename: 'UserFavoriteReport', id: favoriteId });
    cache.modify({
      fields: {
        favoriteReportsByUser: (existingRefs = {}, { readField }) => {
          if (existingRefs.items.some((ref: any) => readField('id', ref) === favoriteId))
            return existingRefs;
          return {
            ...existingRefs,
            items: [...existingRefs.items, { __ref: refId }],
          };
        },
      },
    });
  },
});
```

This does not (`userFavorites` is a field inside the `Report` type):

```ts
addToFavoritesMutation({
  variables: { id, userId, reportId },
  optimisticResponse: {
    __typename: 'Mutation',
    createUserFavoriteReport: {
      __typename: 'UserFavoriteReport',
      id,
      userId,
      reportId,
      report: {
        __typename: 'Report',
        name: reportName,
      },
    },
  },
  update: (cache, { data }) => {
    const favoriteId = data?.createUserFavoriteReport?.id;
    const refId = cache.identify({ __typename: 'UserFavoriteReport', id: favoriteId });
    cache.modify({
      id: cache.identify({ __typename: 'Report', id: reportId }),
      fields: {
        userFavorites: (existingRefs = {}, { readField }) => {
          if (existingRefs.items.some((ref: any) => readField('id', ref) === favoriteId))
            return existingRefs;
          return {
            ...existingRefs,
            items: [...existingRefs.items, { __ref: refId }],
          };
        },
      },
    });
  },
});
```

I have added both […] to my favorite reports query hook, but this doesn't seem to have any effect when updating a root query. I'm also not sure if this is expected behavior.
I had the same issue, and it turned out that a mismatch in data structure caused another network request to be fired after the cache update. Changing the fetchPolicy of the initial query to "cache-only" made this visible in the console, as I got some warnings after I executed the manual cache update. I used some aliasing on the underlying connection (childrenLimited), and it looks like cache.modify can't handle that well: […]

Thanks @pontusab for pointing in the right direction.
@milosdavidovic […]
Thanks @benjamn. Yes, I've removed the alias, but also had to remove the […].
We had that issue today, and the problem was that the result from the mutation didn't have the same fields the query had. Our query was like this:

```graphql
query {
  comments {
    edges {
      id
      content
      totalReplies
    }
  }
}
```

Our mutation was like this:

```graphql
mutation {
  addComment {
    id
    content
  }
}
```

So, when we called cache.modify and added a reference to the recently created comment, the fields didn't match and the query was refetched:

```js
cache.modify({
  fields: {
    comments(previous, { toReference }) {
      return {
        ...previous,
        edges: [toReference(newComment), ...previous.edges],
      };
    },
  },
});
```

So we updated our mutation to return the missing field:

```graphql
mutation {
  addComment {
    id
    content
    totalReplies
  }
}
```

Our queries and mutations actually have a more complex structure, but I stripped them down to make the example more readable.
I've managed to fix the issue. Since I reported this issue on October 1st, I don't really remember what my mutation was returning back then or how exactly I tried to update the cache. But this is the code that works for me now; the mutation response looks like this: […]

The cache update looks like this: […]

My conclusion is that either the subcategory object I used in cache.identify was not returned correctly, or I wasn't returning correctly structured data to the subcategoryItemConnection field.
This was happening to me too. After reading this issue, I found that the problem was indeed running cache.modify with a fragment structure that is not identical to that of the schema. I was omitting some optional fields and was getting a refetch. Then I added the fields and it worked. Weird behaviour; I would have expected Apollo Client to understand optional fields...
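If the server legitimately omits optional fields, one workaround is to normalize the result before writing it, so the cached object carries an explicit `null` for every field the query selects. This is a hedged sketch with illustrative field names, not an Apollo Client API:

```javascript
// Sketch: fill omitted optional fields with explicit nulls so the
// object written to the cache matches the query's selection set.
// Field names here (e.g. "media") are illustrative.
function withDefaults(result, optionalFields) {
  const filled = { ...result };
  for (const field of optionalFields) {
    if (!(field in filled)) filled[field] = null;
  }
  return filled;
}
```

You would call something like `withDefaults(data.sendMessage, ["media"])` before handing the object to the cache update.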
Thanks a lot! Took me hours to figure this out. 👍
Hey everyone, I have a little issue when trying to update the cache using the new cache.modify feature.

My GraphQL query looks like this: […]

When I add a new item to the subcategoryItemConnection and try to update the cache after the mutation, cache.modify refetches the entire query as soon as I access a field of a store element. Here is the code: […]

So as soon as I access the subcategoryItemConnection field, the query gets refetched, and any logic I write inside the modifier function gets ignored. As I understand it, the modifier function shouldn't refetch the query; it should do the opposite and let me update the cache without refetching all the data from the backend. Can someone please tell me what the issue is, or whether I'm misunderstanding something?

Thanks