
external smb/cifs storages not (always?) refreshing #20898

Closed · kkplein opened this issue May 10, 2020 · 52 comments

Labels: 0. Needs triage (pending check for reproducibility or fit with the roadmap) · bug · needs info

Comments

@kkplein commented May 10, 2020

Nextcloud 18.0.4, Debian 10 with the stock smbclient / php-smbclient packages, connected to AD/LDAP.

We configured external storages pointing to our internal file server, using CIFS/SMB with the "Log-in credentials, save in session" authentication option.

If a user has no rights on a shared folder, the directory is displayed as empty ("No files in here").

The user requests access and we add the user to the AD group. The user logs out of Nextcloud, logs back in, and tries to access the same folder again, but STILL: "No files in here". However, the user does have access now: we can confirm it through Windows file server access, and when manually using smbclient from the Nextcloud server the user CAN see and access the directory contents.

In Nextcloud, the user has to create a NEW file in that (seemingly empty) directory; then suddenly the contents are refreshed and all existing files/folders show up.

So, Nextcloud was showing old, cached "No files in here" info.
The browser refresh button etc. makes no difference.

We have:

  • user backend: AD/LDAP
  • configured the smb/cifs storage to "Check for changes: Once every direct access"
  • config.php contains: 'filesystem_check_changes' => 1 (see the sketch below)
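For reference, a minimal sketch of setting that option from the command line instead of editing config.php by hand, assuming a standard installation where occ lives in the Nextcloud root and the web server runs as www-data:

    # Equivalent to 'filesystem_check_changes' => 1, in config.php.
    sudo -u www-data php occ config:system:set filesystem_check_changes --value 1 --type integer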

We are running occ files:scan --all, but we guess it doesn't work, as the smbclient storage is only mounted on demand and is not always available to be scanned.
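For completeness, a targeted scan of the mount (in the /userid/files/... path form that comes up later in this thread) would look roughly like the sketch below; the user name "alice" and mount point "/FileServer" are placeholders. Note that with "save in session" credentials, occ probably cannot mount the storage at all, which matches the guess above.

    sudo -u www-data php occ files:scan --path="/alice/files/FileServer"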

Operating system:
Debian 10.3

Web server:
Apache 2.4.38

Database:
mariadb 10.3.22

PHP version:
7.3.14

Nextcloud version: (see Nextcloud admin page)
18.0.4

Updated from an older Nextcloud/ownCloud or fresh install:
fresh

Where did you install Nextcloud from:
regular download from nextcloud

Login as admin user into your Nextcloud and access 
http://example.com/index.php/settings/integrity/failed 
No errors were found

List of activated apps:

App list Enabled: - accessibility: 1.4.0 - admin_audit: 1.8.0 - bruteforcesettings: 1.6.0 - cloud_federation_api: 1.1.0 - comments: 1.8.0 - dav: 1.14.0 - federatedfilesharing: 1.8.0 - federation: 1.8.0 - files: 1.13.1 - files_external: 1.9.0 - files_pdfviewer: 1.7.0 - files_rightclick: 0.15.2 - files_sharing: 1.10.1 - files_trashbin: 1.8.0 - files_versions: 1.11.0 - files_videoplayer: 1.7.0 - logreader: 2.3.0 - lookup_server_connector: 1.6.0 - nextcloud_announcements: 1.7.0 - notifications: 2.6.0 - oauth2: 1.6.0 - privacy: 1.2.0 - provisioning_api: 1.8.0 - recommendations: 0.6.0 - serverinfo: 1.8.0 - settings: 1.0.0 - sharebymail: 1.8.0 - support: 1.1.0 - survey_client: 1.6.0 - systemtags: 1.8.0 - text: 2.0.0 - theming: 1.9.0 - twofactor_backupcodes: 1.7.0 - updatenotification: 1.8.0 - user_ldap: 1.8.0 - viewer: 1.2.0 - workflowengine: 2.0.0

Nextcloud configuration:

Config report { "system": { "lost_password_link": "https:\/\/id....removed", "auth.bruteforce.protection.enabled": true, "overwritehost": "cloud....removed", "overwriteprotocol": "https", "htaccess.RewriteBase": "\/", "filesystem_check_changes": 1, "instanceid": "***REMOVED SENSITIVE VALUE***", "passwordsalt": "***REMOVED SENSITIVE VALUE***", "secret": "***REMOVED SENSITIVE VALUE***", "trusted_domains": [ "cloud....removed" ], "memcache.local": "\\OC\\Memcache\\Redis", "memcache.locking": "\\OC\\Memcache\\Redis", "redis": { "host": "***REMOVED SENSITIVE VALUE***", "port": 0, "timeout": 0, "password": "***REMOVED SENSITIVE VALUE***", "dbindex": 0 }, "datadirectory": "***REMOVED SENSITIVE VALUE***", "dbtype": "mysql", "version": "18.0.4.2", "overwrite.cli.url": "https:\/\/cloud....removed", "dbname": "***REMOVED SENSITIVE VALUE***", "dbhost": "***REMOVED SENSITIVE VALUE***", "dbport": "", "dbtableprefix": "oc_", "mysql.utf8mb4": true, "dbuser": "***REMOVED SENSITIVE VALUE***", "dbpassword": "***REMOVED SENSITIVE VALUE***", "installed": true, "ldapIgnoreNamingRules": false, "ldapProviderFactory": "OCA\\User_LDAP\\LDAPProviderFactory", "mail_smtpmode": "smtp", "mail_smtphost": "***REMOVED SENSITIVE VALUE***", "mail_sendmailmode": "smtp", "mail_smtpport": "465", "maintenance": false, "app_install_overwrite": [ "rainloop" ] } }

Are you using external storage, if yes which one: local/smb/sftp/...
yes: smbclient

Are you using encryption: yes/no
No

Are you using an external user-backend, if yes which one: LDAP/ActiveDirectory/Webdav/...
Yes, LDAP.

LDAP configuration (delete this part if not used)

If needed, I can post LDAP config details, but I'd rather not. Everything else around groups/LDAP/auth seems to work fine.

I think the details I provided are complete. Please let me know if you need anything else.

@kkplein added the "0. Needs triage" and "bug" labels May 10, 2020
@Ravinou commented May 12, 2020

I have a similar problem (I'm on 18.0.4). If local storage is inaccessible for a few minutes, it won't refresh. The external storage has to be deleted and recreated, which results in the loss of all public links created in the external storage.

@Ravinou commented May 13, 2020

Running "cron.php" not work for me.
Running "occ files:scan --all" works for me but all share links are removed !!

What is doing the option : "Check for changes : once every direct access" ?

Thanks

@thorgahr

Same issue here with the External storage support app and a Samba share integrated into Nextcloud.

For example:

A directory contains five files: three added using Windows Explorer (SMB/CIFS share) and two added using the Nextcloud frontend (no difference whether the browser or the app is used).
In Windows Explorer all five files are visible; in Nextcloud only the two added through Nextcloud are displayed.
When "occ files:scan" is run on the described directory, only the two Nextcloud files are found (1 folder, 2 files). When the content of the directory is listed with "ls" on the server, all five files are displayed.
I have checked the access rights for the filesystem, the Apache service, the occ tool and so on; everything seems correct to me.

@RHCPNG commented Aug 16, 2020

I believe I have the same problem. I'm running Nextcloud in Docker with an NFS share attached to the container, which is added to Nextcloud as external storage. However, when I add files without using Nextcloud, the files are not visible in Nextcloud.

I have tried setting the permissions (the Docker user as owner, mode 775) without success. I have also run the occ file scan.

@kkplein (Author) commented Aug 16, 2020

It's a pity there is so little activity on this ticket. :-( If only there were a way to completely disable any directory-contents caching...

@Zeoic commented Aug 17, 2020

I am having the same/similar issue (mine is a local external storage pointed at an SMB share mounted into the container; SMB external storages are brutally slow). I need to connect to my container and run the occ scan any time I add a file via SMB. The folder is 32 TB with 280,000 files, so a full scan takes ages to complete; any time I add a file and want it in Nextcloud, I also have to specify which folder the change was in. It would be nice if "Check for changes: Once every direct access" actually did anything, or at the very least if a re-scan-current-folder button were added.

@mr-prud commented Aug 17, 2020

Hi,
same behavior with the FTP external storage type.

@Ravinou commented Aug 25, 2020

Regardless of the protocol, the "Check for changes: Once every direct access" feature does not work, and if there is a connection problem with the external storage, upon reconnection all shares created inside that external storage are deleted!

@Ravinou commented Aug 25, 2020

I just upgraded to Nextcloud 19.0.1 and the "Check for changes: Once every direct access" option seems to work; can somebody else confirm?
*Edit: this is still not optimal. After a temporary disconnection of the external storage, some directories still appear empty on reconnection, and shares are still deleted whenever the storage is temporarily disconnected.
So it's still unusable for me. I hope that one day this ticket will be read 😅 🙏

@kkplein (Author) commented Aug 25, 2020

Just checked, and YES, the contents of the share in Nextcloud refreshed after updating the CIFS share outside Nextcloud. So that bit seems to be solved. I have not yet seen your issue of losing the defined shares when the external storage becomes temporarily disconnected.

@Ravinou commented Aug 25, 2020

@kkplein
1. Create a public share on a file or folder in your external storage.
2. Open the link in a tab.
3. Make your external storage inaccessible (without deleting it in Nextcloud's external storage options; just make sure your Nextcloud can't reach it anymore).
4. Refresh the tab opened in step 2. There will be an error, which is expected.
5. Make your external storage accessible again.
6. Now the public link opened in step 2 is deleted for me, along with all shares created in this external storage.

@Ravinou commented Aug 25, 2020

I'm not a developer, but I found this:

// valid Nextcloud instance means that the public share no longer exists
// since this is permanent (re-sharing the file will create a new token)
// we remove the invalid storage

Would that explain what I'm experiencing?

@kkplein (Author) commented Aug 25, 2020

Seems it would... a bit drastic to remove it completely...

@sgohl commented Sep 9, 2020

I confirm that this problem still exists in the current version of Nextcloud (19.0.2): external storages do not show directory content reliably. I can keep refreshing the page like an idiot, but still "No files in here".

I'm running the official Nextcloud Docker image with NFS Docker volumes mounted into /mnt, and the external storages defined as local.

If this problem is caused by caching, it would absolutely solve it if there were a way to completely disable caching. The cache creates more problems than it solves, especially when the external storages contain massive amounts of files and directories and are updated outside of Nextcloud on a regular basis (in terms of seconds).

In this case, occ files:scan --all is absolutely NOT a solution, nor a workaround. How do I disable the cache and just use the storage as it is?

@Ravinou commented Sep 9, 2020

Hi @sgohl, can you tell me if you experience this too: #20898 (comment)

Thanks!

@sgohl commented Sep 9, 2020

Hi @sgohl, can you tell me if you experience this too: #20898 (comment)

I'm afraid I have not enabled the sharing functionality on external storages, since access is solely role-based.

But I think the behavior you describe is not quite the same problem the OP describes, which is unreliable refreshing.
And based on the linked code, it seems to work as designed.

@Zeoic commented Sep 9, 2020

I recently found out why mine was not updating. It was in fact trying to update on direct access, but the share was so large that the update took so long it effectively wasn't updating. I imagine the update-on-direct-access feature only updates the entire external storage, not the folder inside it that you are accessing.

I created a second external storage pointing at a deeper folder with only a few files in it, and it updated instantly on every direct access, as expected. So my workaround is to make multiple external storages for often-accessed folders (see the sketch below), and it has been working well.

So in the end, external storage is just not very viable for larger folders.
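A hedged sketch of scripting that workaround with occ, using the local storage backend as an example (the mount name, path, and backend choice are illustrative; an SMB mount would use the smb backend with its own config keys):

    # Create a narrower external storage for a frequently accessed subfolder,
    # so the update-on-access scan has far less to traverse.
    sudo -u www-data php occ files_external:create "/Projects-Active" local null::null \
        -c datadir="/mnt/bigshare/projects/active"
    # Confirm the new mount:
    sudo -u www-data php occ files_external:list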

@sgohl commented Sep 9, 2020

updates the entire external storage, not the folder inside it that you are accessing

I think this is the exact problem I am also facing. So I am wondering what this "update" thing is actually all about.
Why can't it be configured to just do a bloody classic ls, like the simplest of file managers or plain FTP can?

Then directory listing would take its time? Fine, I'll take it.

@Zeoic commented Sep 9, 2020

In their defence, it does make sense. Nextcloud shows the sizes of the external storages (and, I assume, other information) that could not be known without a full sync of the external storage. It can't really know that only one file changed until it checks them all. I think a decent fix/workaround would be to have one major full sync on direct access, but also simultaneously run a non-recursive sync on the directory you are directly accessing in another thread. That way the whole external storage would eventually be updated, and the user experience would not suffer.

@sgohl commented Sep 9, 2020

Nextcloud shows the sizes of the external storages

This is metadata and can/should be gathered asynchronously and independently, without influencing the way files are accessed.

could not be known without a full sync of the external storage

It would be absolutely fine to mimic a df once a day. In my opinion this does not require a full scan of all files. At the least, one should be able to disable it if unwanted.

It can't really know that only one file changed until it checks them all

Why? In a normal file manager you also do not know that a file changed until you access the folder/file to check it.
Just imagine desktop file managers caching your whole hard disk; to what extent? OK, file systems have journals and such, but you don't expect this from FTP either. And as much as I hate to say it, FTP is more reliable, at least in that way: when I enter a remote directory, I get the exact directory listing at that point in time, not something cached, obsolete, or missing data.

I think a decent fix/workaround would be to have one major full sync on direct access, but also simultaneously run a non-recursive sync on the directory you are directly accessing in another thread. That way the whole external storage would eventually be updated, and the user experience would not suffer.

Complicated, but yes, that would do it. Or just add a switch to disable that fancy bollocks.

@Zeoic commented Sep 9, 2020

It would be absolutely fine to mimic a df once a day. In my opinion this does not require a full scan of all files. At the least, one should be able to disable it if unwanted.

That could work if Nextcloud wanted to show you the size of a disk on your system, but that's not really the goal. Getting the size of a certain folder requires a recursive check of every file's size, which is then all added together. This is how it works on any operating system.

Why? In a normal file manager you also do not know that a file changed until you access the folder/file to check it.
Just imagine desktop file managers caching your whole hard disk; to what extent? OK, file systems have journals and such, but you don't expect this from FTP either. And as much as I hate to say it, FTP is more reliable, at least in that way: when I enter a remote directory, I get the exact directory listing at that point in time, not something cached, obsolete, or missing data.

Nextcloud is not just a simple file explorer. It keeps its own database entries for the files and folders under its care, and these are used for things like syncing and file sharing with other users. This means that in order to perform its core features, it needs to run the scan and update its own database.
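For the curious, those entries live in the oc_filecache table (standard schema; this instance uses the default oc_ prefix per the config report above). A minimal peek, with the database name and user as placeholders:

    # Count cached entries per storage in Nextcloud's own file cache.
    mysql -u nextcloud -p nextcloud -e "SELECT storage, COUNT(*) AS entries FROM oc_filecache GROUP BY storage;"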

Complicated, but yes, that would do it. Or just add a switch to disable that fancy bollocks.

Yes, that would help some cases too, but it could very well cause inconsistencies with the syncing and sharing features.

@sgohl commented Sep 9, 2020

Anyway, how is that files:scan command meant to be used with external storage, where it can't get the user id from the path?

www-data@efb6164e372a:~/html$ ./occ files:list 
+----------+-------------------------------+---------+---------------------+------------------------------------------+---------------------------------+------------------+-----------------------------------+
| Mount ID | Mount Point                   | Storage | Authentication Type | Configuration                            | Options                         | Applicable Users | Applicable Groups                 |
+----------+-------------------------------+---------+---------------------+------------------------------------------+---------------------------------+------------------+-----------------------------------+
| 9        | /LITS                         | Local   | None                | datadir: "\/mnt\/lits"                   | previews: false                 |                  | admin, developers, mount_dev      |

www-data@efb6164e372a:~/html$ ./occ files:scan --path="/mnt/lits"
Unknown user 1 mnt
+---------+-------+--------------+
| Folders | Files | Elapsed time |
+---------+-------+--------------+
| 0       | 0     | 00:00:00     |
+---------+-------+--------------+

@szaimen (Contributor) commented Sep 9, 2020

If I remember correctly, the path attribute must include the user id, so something like this:

    occ files:scan --path="/userid/files/name-of-external-storage"

So if one user that has access to the mount is e.g. Marco (which should be his user id), the exact command for your mount point /LITS would be:

    occ files:scan --path="/Marco/files/LITS"

If you scan it as Marco, it will automatically be refreshed for all users that have access to this mount.

@sgohl commented Sep 9, 2020

occ files:scan --path="/Marco/files/LITS"

Yes, I know that you CAN use it that way. You see it in the path: no user is involved at all. As said, the NFS share is mounted via a Docker volume into /mnt.

Why would I bind an external storage to a user? The whole point of external storage, to me, is that it is available globally and does not belong to any user. Users just get access to the storage by being members of a group, which is defined here:

(screenshot: the storage's group-based "Available for" configuration)

I don't think I am doing something wrong here, since this is an elegant solution, and if it weren't meant to be used that way, why is this group-based access available?
If you have an unknown number of users accessing this storage, that would mean you had to scan the same storage for every user!?

(screenshot: another files:scan attempt)

That doesn't work either.

@szaimen (Contributor) commented Sep 9, 2020

Why would I bind an external storage to a user? The whole point of external storage, to me, is that it is available globally and does not belong to any user. Users just get access to the storage by being members of a group, which is defined here:

You misunderstood me. You can just scan it as any user, e.g. one in the admin group. It will then automatically refresh for all users that have access to the storage.

@sgohl commented Sep 9, 2020

Oh, I see, it means the path as a user sees it... thanks, that works! Now I can at least work around the problem a tiny bit.

one major full sync on direct access, but also simultaneously run a non-recursive sync on the directory you are directly accessing in another thread

This would indeed be the most viable solution, as long as the shallow sync is done before the user gets the directory listing.

@Ravinou commented Sep 10, 2020

So my workaround is to make multiple external storages for often-accessed folders, and it has been working well.

It's interesting, even if you admit it's not a solution except for a specific case.
I hope this ticket will be read and supported one day.

@kkplein (Author) commented Sep 10, 2020

The discussion is all very interesting, but I don't think the workaround applies to our case. We use the session credentials to authenticate to the external storage (a Samba file server), so any kind of sync can only happen with the logged-on user's credentials. This is why we would simply like to disable any kind of caching.
We have also disabled sharing on those external storages, so for us the most basic kind of access would be perfect, and we would like to disable ANY form of database or cache in between.
And as Ravinou says: let's hope this gets addressed by the Nextcloud team...

@adrfantini
Just chiming in to share my experience. I added a new folder to my (large) local external storage yesterday, and it still did not show up in the UI, despite trying over and over to access it as different users, renaming it, adding new files to it, checking its permissions, etc. The only thing that fixed it was manually scanning it:

    sudo -u abc ./occ files:scan --path="/myuser/files/myshare/myfolder"

@tylerlucas
Same issue. I deleted a file from the Nextcloud UI, then restored it using a backup tool (Duplicacy). The file appears via the Ubuntu CLI, but does not appear in Nextcloud.

What's the solution?

@rgl1234 commented Dec 12, 2020

I have the same issue, but only after upgrading to v20; it was working correctly up to v19.0.5.
See also #23988.
Hope this bug will be fixed soon...

@tylerlucas commented Dec 12, 2020 via email

@adrfantini
This continues to happen to me quite a lot. I can test things if needed. I am running linuxserver's Nextcloud Docker image on Unraid.

@rwakcjr commented Jan 12, 2021

I am using Docker with a local mount (an NFS share to another server), and it seems the cron container must have the same mount as the app container.

Solution found in this thread:
nextcloud/docker#426

@rgl1234 commented Jan 24, 2021

Can someone check whether the source of this problem is the same as the one mentioned in #23988?
Thanks

@szaimen (Contributor) commented Jun 22, 2021

They look different to me.

@szaimen (Contributor) commented Aug 8, 2021

I suppose this is still valid on NC 21.0.4?

@ghost commented Sep 7, 2021

This issue has been automatically marked as stale because it has not had recent activity and seems to be missing some essential information. It will be closed if no further activity occurs. Thank you for your contributions.

@ghost added the "stale" label Sep 7, 2021
@q-wertz commented Sep 7, 2021

I think this will be fixed by #28377

@ghost removed the "stale" label Sep 7, 2021
@skjnldsv (Member) commented Sep 7, 2021

Yep

@skjnldsv (Member) commented Sep 7, 2021

Duplicate of #23988

@skjnldsv marked this as a duplicate of #23988 Sep 7, 2021
@skjnldsv closed this as completed Sep 7, 2021
@ciberboy

I have the very same issue. The scan command works, but it is definitely not a solution nor a workaround.

@razvanmarcus

Same issue with an externally mounted USB disk. Nextcloud version 24.0.4.

@DrLecter

Same issue between sibling VPSs, with external storage configured via SFTP using public-key auth. Creating a new folder, then exiting and deleting it, is the only GUI-driven way to force a rescan. A "refresh" button could mitigate this...
I'm running NC 23.0.11 from a TurnKey Linux LXC on a PVE host.

@yunyuyuan commented Nov 29, 2022

Same issue. I'm using docker-compose and have mounted a USB disk into the Nextcloud container:

    volumes:
      - /mnt/usb:/var/Storage

then added /var/Storage to External storages. If I eject the USB and insert it again (I have configured fstab to auto-mount), folders in Nextcloud appear empty, even after running docker-compose down && docker-compose up -d.

By the way, docker exec -i next-cloud-app-1 ls /var/Storage/ lists the directories correctly.
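A hedged sketch of forcing a rescan inside that container after remounting (the container name is taken from the comment above; the user id "admin" and mount name "Storage" are placeholders; www-data is the web server user in the official image):

    docker exec -u www-data next-cloud-app-1 php occ files:scan --path="/admin/files/Storage"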

@rwakcjr commented Dec 6, 2022

Do you also use an additional container for Nextcloud cron tasks? If you do, mount the same volume at the same path in the cron container as well; see the compose sketch below.
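A minimal docker-compose sketch of that advice, extending the volumes excerpt quoted earlier in this thread (the service names are assumptions; /cron.sh is the cron entrypoint shipped with the official image):

    services:
      app:
        image: nextcloud
        volumes:
          - /mnt/usb:/var/Storage   # external storage visible to the web app
      cron:
        image: nextcloud
        entrypoint: /cron.sh        # background jobs run here
        volumes:
          - /mnt/usb:/var/Storage   # same volume, same path, so cron jobs see it too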

@alexunderboots

Source data:

  • "empty folders" in an SMB external storage
  • a space in the share name
  • mount would not mount with any variation of escaping the space
  • docker-compose

The solution that helped me (see the sketch below):

  • mount the share via fstab in the host OS
  • add the corresponding volume in docker-compose.yml
  • add it as a local external storage
  • in the host OS, change the owner of each problematic folder to www-data
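A hedged sketch of the fstab part for a share name containing a space (server, share, mount point, and credentials file are placeholders; "\040" is fstab's escape sequence for a space):

    # /etc/fstab on the host OS. uid=33/gid=33 map the files to www-data on
    # Debian-based images, since chown does not stick on plain CIFS mounts.
    //fileserver/My\040Share  /mnt/myshare  cifs  credentials=/root/.smbcred,uid=33,gid=33  0  0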

@alex-c3 commented May 1, 2024

Same issue, afaik: after setting up a functional external storage with CIFS, adding a file to that storage from a file explorer will, most of the time, not show it in the Nextcloud web interface.

Nextcloud 28.0.2

@UserX404 commented May 11, 2024

add it as a local external storage
in the host OS, change the owner of each problematic folder to www-data

This doesn't survive a reboot, because my OS changes the permissions back on boot-up.
Note: I'm running Nextcloud on a QNAP as a Docker application and want to mount the 'Multimedia' shared folder.

@rgl1234 commented May 11, 2024

The problem seems to be solved in v29.

@alex-c3 commented May 11, 2024

Seems to be solved for me too in Nextcloud 29

@gitwittidbit

I am currently having this (or a very similar) issue in 29.0.6
