external smb/cifs storages not (always?) refreshing #20898
I have a similar problem (I'm on 18.0.4). If the local storage is inaccessible for a few minutes, it won't refresh. The external storage has to be deleted and recreated, which results in the loss of all public links created in the external storage. |
Running "cron.php" does not work for me. What does the option "Check for changes: once every direct access" do? Thanks |
Same issue here with the External storage support app and a Samba share integrated into Nextcloud. For example: a directory with five files, three of them added using Windows Explorer (SMB/CIFS share) and two added using the Nextcloud frontend (no difference whether browser or app is used). |
I believe I have the same problem. I'm running Nextcloud in Docker with an NFS share attached to the container, which is added to Nextcloud as external storage. However, when I add files without using Nextcloud, the files are not visible in Nextcloud. I have tried setting the permissions (owner as the docker user, mode 775) without success. I have also run the occ file scan. |
It's a pity there is so little activity on this ticket. :-( If only there were a way to completely disable any directory-contents caching... |
I am having the same/similar issue as well (mine is a local external storage pointing to an SMB share mounted by the container; SMB external storages are brutally slow). I need to connect to my container and run the occ scan any time I add a file via SMB. The folder is 32 TB with 280,000 files, so a full scan would take ages to complete, which means any time I add a file and want it in Nextcloud I also have to specify which folder the change was in. It would be nice if "Check for changes: Once every direct access" actually did anything, or at the very least a re-scan-current-folder button were added. |
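For others hitting this: `occ files:scan` accepts a `--path` argument, so only the changed subtree needs to be walked instead of the whole 32 TB share. A minimal sketch, where the mount name "Media", the user "admin", and the subfolder "incoming" are all hypothetical placeholders for your own setup:

```shell
# Hypothetical names: storage mounted as "Media", visible to user
# "admin", with new files added under "incoming".
TARGET="/admin/files/Media/incoming"
echo "$TARGET"
# Then, from the Nextcloud installation directory, as the web user:
#   sudo -u www-data php occ files:scan --path="$TARGET"
```

The path format is `/<user>/files/<folder as that user sees it>`; scanning a deep subfolder returns in seconds where a full scan would take hours.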
Hi, |
Regardless of the protocol, the "Check for changes once every direct access" feature does not work and if unfortunately there is a connection problem with the external storage, upon reconnection all shares created inside this external storage are deleted! |
I just upgraded to Nextcloud 19.0.1 and the option "Check for changes: once every direct access" seems to be working. Can somebody else confirm? |
Just checked, and YES, the contents of the share refreshed in Nextcloud after updating the CIFS share outside Nextcloud. So that bit seems to be solved. I have not yet noticed your issue of losing the defined shares when the external storage becomes disconnected temporarily. |
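For reference, the "Check for changes" behaviour appears to correspond to the `filesystem_check_changes` setting (0 = never check for outside changes, 1 = check at most once per direct access); it shows up as `"filesystem_check_changes": 1` in the config report in this thread. A minimal config/config.php fragment, shown here only as a sketch of where that value lives:

```php
<?php
// config/config.php fragment.
// 0 = never check external/local storage for outside changes,
// 1 = check a file or folder at most once per direct access.
$CONFIG = [
  'filesystem_check_changes' => 1,
];
```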
@kkplein |
I'm not a developer, but I found this:
Would that explain what I'm experiencing ? |
Seems it would... a bit drastic to remove it completely... |
I confirm that this problem still exists in the current version of Nextcloud (19.0.2): no external storage shows directory content reliably. I can keep refreshing the page like an idiot, but still "No files in here". I'm running the official nextcloud docker image with NFS docker volumes mounted into
If this problem is based on caching, it would absolutely solve the problem if there were a way to just completely disable caching. It creates more problems than it solves, especially when the external storages contain massive amounts of files and directories and are being updated outside of Nextcloud on a regular basis (in terms of seconds). In this case, |
Hi @sgohl, can you tell me if you experience this too: #20898 (comment) Thanks! |
I'm afraid I have not enabled the sharing functionality on external storages, since access is solely role-based. But I think the behaviour you describe is not quite the same problem the OP describes, which is unreliable refreshing. |
I recently found out why mine was not updating. It was in fact trying to update on direct access, but the share was so large that the update took so long it effectively wasn't updating. I imagine the update-on-direct-access feature only updates the entire external storage, and not the folder inside it that you are accessing. I created a second external storage that pointed to a deeper folder with only a few files in it, and it updated instantly on every direct access, as expected. So my workaround is to make multiple external storages for often-accessed folders, and it has been working well. In the end, external storage is just not very viable for larger folders. |
I think this is the exact problem I am also facing. So I am wondering what this "update" thing is actually all about. Directory listing would then take its time? Fine, I'll take it |
In their defence, it does make sense. Nextcloud shows sizes of the external storages (and, I assume, other information) that could not be known without a full sync of the external storage. They can't really know that only that one file was changed until they check it all. I think a decent fix / workaround would be to have one major full sync on direct access, but also to simultaneously run another non-recursive sync, in a separate thread, on the directory you are directly accessing. That way the whole external storage would eventually be updated, and the user experience would not suffer. |
This is metadata and can/should be gathered asynchronously/independently without influencing the way files are accessed.
It would be absolutely fine to mimic a
Why? In a normal file manager you also do not know that one file was changed until you access the folder/file to check it.
complicated, but yes that would do it. Or just enable a switch to disable that fancy bollocks. |
That could work if Nextcloud wanted to show you the size of a disk on your system, but that's not really the goal. Getting the size of a certain folder requires a recursive check of every file's size, which are then added together. This is how it works on any operating system.
Nextcloud is not just a simple file explorer. Nextcloud has its own database entries for files and folders under its care, and these are used for things like syncing and file sharing with other users. This means that in order for it to perform its core features, it needs to run the scan and update its own database.
yes, that would help some cases too, but could very well cause inconsistencies with the syncing and sharing features. |
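The folder-size point above can be illustrated in a few lines of shell: a directory's total size is not stored anywhere, it has to be computed by walking every file underneath it.

```shell
# Build a tiny tree, then sum the bytes of every file recursively,
# the way any size calculation ultimately must.
dir=$(mktemp -d)
mkdir -p "$dir/sub"
printf 'abc' > "$dir/a.txt"        # 3 bytes
printf 'defgh' > "$dir/sub/b.txt"  # 5 bytes
# Recursively concatenate every file and count the bytes:
total=$(find "$dir" -type f -exec cat {} + | wc -c)
echo "$total"                      # 8
rm -rf "$dir"
```

On a 32 TB share with hundreds of thousands of files, that recursive walk is exactly why a full rescan takes so long.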
anyway, how is that files:scan command meant to be used with external storage, where it can't get the user id from the path?
|
if I remember correctly the path attribute must include the userid. |
Yes, I know that you CAN use it that way. You see it in the path. No user involved at all. As said, NFS is mounted via docker volume into
Why would I bind an external storage to a user? The whole point of external storage, to me, is that it is available globally and does not belong to any user. Users simply get access to the storage by being members of a group, which is then defined here:
I don't think I am doing something wrong here, since this is an elegant solution, and if it weren't meant to be used that way, why is this group-based access available?
doesn't work either. |
You misunderstood me. You can just scan it for any user e.g. in the admin group. Then it will automatically refresh for all users that have access to the storage. |
Oh, I see, it means the path as a user sees it... thanks, that works! Now I can at least work around the problem a tiny bit.
This would then indeed be the most viable solution, as long as the sync is done before the user gets the directory listing |
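To make the path convention from the exchange above concrete: the `--path` argument is per-user (`/<user>/files/<mount>`), but running the scan as any one user who can see the group-mounted storage refreshes it for everyone, since the file cache is per-storage rather than per-user. A tiny sketch, with "alice"/"admin" and "Storage"/"Photos" as hypothetical names:

```shell
# Build the per-user scan path "/<user>/files/<folder>" for a
# group-mounted external storage (names are hypothetical).
scan_path() { printf '/%s/files/%s' "$1" "$2"; }
scan_path admin Storage; echo
# Usage:  php occ files:scan --path="$(scan_path admin Storage)"
```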
Interesting, even if, as you admit, it's only a solution for one specific case? |
The discussion is all very interesting, but I don't think the workaround applies to our case. We use the session credentials to authenticate to the external storage. (a samba file server) Any kind of sync can only happen with the logged-on credentials. This is why we would simply like to disable any kind of caching, etc. |
Just chiming in to share my experience. I added a new folder to my (large) local external storage yesterday, and it still did not show up in the UI, despite trying over and over again to access it as different users, to rename it, to add new files to it, to check its permissions, etc. The only thing that fixed it was manually scanning it: |
Same issue. I deleted a file from the NextCloud UI, then restored it using a backup tool (Duplicacy). File appears via Ubuntu CLI, but does not appear in NextCloud. What's the solution? |
I have the same issue but only after upgrading to V20. It was working correctly until V19.05 |
I ended up uninstalling NextCloud and just moving to Samba. Much happier so far.
see also #23988
Hope that this bug will be fixed soon....
This continues to happen to me quite a lot. I can test things if needed. I am running the linuxserver's nextcloud docker on Unraid. |
I am using Docker with a local mount (NFS share to another server), and it seems the cron container must have the same mount as the app one. Solution found in this thread |
Can someone check whether the source of this problem is the same as the one mentioned in #23988? |
They look different to me. |
I suppose this is still valid on NC 21.0.4? |
This issue has been automatically marked as stale because it has not had recent activity and seems to be missing some essential information. It will be closed if no further activity occurs. Thank you for your contributions. |
I think this will be fixed by #28377 |
Yep |
Duplicate of #23988 |
I have the very same issue. The scan command works, but it is definitely not a solution, nor a workaround. |
Same issue with an external USB-mounted disk. Nextcloud version 24.0.4 |
Same issue between sibling VPSs, external storage configured via SFTP with public-key auth. Creating a new folder, then exiting and deleting it, is the only GUI-driven way to force a rescan. A "refresh" button could mitigate this... |
Same issue. I'm using docker-compose and have mounted a USB disk into the nextcloud container:

volumes:
  - /mnt/usb:/var/Storage

then add
btw, |
Do you also use an additional container for nextcloud cron tasks? If you do, mount the same volume to the same path in the cron container as well. |
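The advice above can be sketched as a docker-compose fragment. This is a sketch only, assuming the official `nextcloud` image with its documented `/cron.sh` entrypoint for the cron companion container, and reusing the `/mnt/usb:/var/Storage` mount from the earlier comment:

```yaml
# Hypothetical fragment: the cron service mounts the same host path at
# the same container path as the app service, so background scans see
# exactly the same files the web frontend does.
services:
  app:
    image: nextcloud
    volumes:
      - /mnt/usb:/var/Storage
  cron:
    image: nextcloud
    entrypoint: /cron.sh
    volumes:
      - /mnt/usb:/var/Storage
```

If the cron container lacks the mount, its scans silently run against an empty path, which looks exactly like "external storage never refreshes".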
Source data: The solution that helped me: |
Same issue afaik : Nextcloud 28.0.2 |
This doesn't survive a reboot, because my OS changes the permissions back on boot-up |
Problem seems to be solved in v29 |
Seems to be solved for me too in Nextcloud 29 |
I am currently having this (or a very similar) issue in 29.0.6 |
Nextcloud 18.0.4, Debian 10 with stock smbclient / php-smbclient, connected to AD/LDAP.
Configured external storages pointing to our internal fileserver, using cifs/smb, Log-in credentials, save in session.
If a user has no rights on a shared folder, the directory is displayed as an empty directory ("No files in here").
The user requests access, we add the user to the AD group, the user logs out of NC, logs back in, and tries again to access the same folder, but STILL: "No files in here". However, the user does have access now, as we can confirm through Windows fileserver access; also, when manually using smbclient from the Nextcloud server, the user CAN see and access the directory contents.
In NC, the user has to create a NEW file in that (seemingly empty) directory, and then suddenly the contents are refreshed and all existing files/folders show up.
So, NC was showing old, cached "No files in here" info.
Browser refresh button etc, makes no difference.
We have:
We are running occ files:scan --all, but we suspect it doesn't work because the smbclient storage is only mounted on demand and is not always available to be scanned.
Operating system:
Debian 10.3
Web server:
Apache 2.4.38
Database:
mariadb 10.3.22
PHP version:
7.3.14
Nextcloud version: (see Nextcloud admin page)
18.0.4.2
Updated from an older Nextcloud/ownCloud or fresh install:
fresh
Where did you install Nextcloud from:
regular download from nextcloud
List of activated apps:
App list
Enabled:
- accessibility: 1.4.0
- admin_audit: 1.8.0
- bruteforcesettings: 1.6.0
- cloud_federation_api: 1.1.0
- comments: 1.8.0
- dav: 1.14.0
- federatedfilesharing: 1.8.0
- federation: 1.8.0
- files: 1.13.1
- files_external: 1.9.0
- files_pdfviewer: 1.7.0
- files_rightclick: 0.15.2
- files_sharing: 1.10.1
- files_trashbin: 1.8.0
- files_versions: 1.11.0
- files_videoplayer: 1.7.0
- logreader: 2.3.0
- lookup_server_connector: 1.6.0
- nextcloud_announcements: 1.7.0
- notifications: 2.6.0
- oauth2: 1.6.0
- privacy: 1.2.0
- provisioning_api: 1.8.0
- recommendations: 0.6.0
- serverinfo: 1.8.0
- settings: 1.0.0
- sharebymail: 1.8.0
- support: 1.1.0
- survey_client: 1.6.0
- systemtags: 1.8.0
- text: 2.0.0
- theming: 1.9.0
- twofactor_backupcodes: 1.7.0
- updatenotification: 1.8.0
- user_ldap: 1.8.0
- viewer: 1.2.0
- workflowengine: 2.0.0

Nextcloud configuration:
Config report
{
  "system": {
    "lost_password_link": "https:\/\/id....removed",
    "auth.bruteforce.protection.enabled": true,
    "overwritehost": "cloud....removed",
    "overwriteprotocol": "https",
    "htaccess.RewriteBase": "\/",
    "filesystem_check_changes": 1,
    "instanceid": "***REMOVED SENSITIVE VALUE***",
    "passwordsalt": "***REMOVED SENSITIVE VALUE***",
    "secret": "***REMOVED SENSITIVE VALUE***",
    "trusted_domains": [
      "cloud....removed"
    ],
    "memcache.local": "\\OC\\Memcache\\Redis",
    "memcache.locking": "\\OC\\Memcache\\Redis",
    "redis": {
      "host": "***REMOVED SENSITIVE VALUE***",
      "port": 0,
      "timeout": 0,
      "password": "***REMOVED SENSITIVE VALUE***",
      "dbindex": 0
    },
    "datadirectory": "***REMOVED SENSITIVE VALUE***",
    "dbtype": "mysql",
    "version": "18.0.4.2",
    "overwrite.cli.url": "https:\/\/cloud....removed",
    "dbname": "***REMOVED SENSITIVE VALUE***",
    "dbhost": "***REMOVED SENSITIVE VALUE***",
    "dbport": "",
    "dbtableprefix": "oc_",
    "mysql.utf8mb4": true,
    "dbuser": "***REMOVED SENSITIVE VALUE***",
    "dbpassword": "***REMOVED SENSITIVE VALUE***",
    "installed": true,
    "ldapIgnoreNamingRules": false,
    "ldapProviderFactory": "OCA\\User_LDAP\\LDAPProviderFactory",
    "mail_smtpmode": "smtp",
    "mail_smtphost": "***REMOVED SENSITIVE VALUE***",
    "mail_sendmailmode": "smtp",
    "mail_smtpport": "465",
    "maintenance": false,
    "app_install_overwrite": [
      "rainloop"
    ]
  }
}

Are you using external storage, if yes which one: local/smb/sftp/...
yes: smbclient
Are you using encryption: yes/no
No
Are you using an external user-backend, if yes which one: LDAP/ActiveDirectory/Webdav/...
Yes, LDAP.
LDAP configuration (delete this part if not used)
I think the details above are complete as provided. Please let me know if you need anything else. |