I've run into a situation where the local cache drive is getting filled up despite having a fixed local cache. The configuration is: a 4 TB drive created on Amazon Cloud Drive with a 30 MB chunk size and a custom security profile, with the local cache set to 6 GB FIXED on a 120 GB SSD (the SSD is exclusive to CloudDrive; there is absolutely nothing else on this drive).

When the local cache is filled up (6 GB), CloudDrive starts throttling write requests, as it should be doing (hooray for this feature, by the way). However, when the total amount of data uploaded is nearing the size of the cache drive, CloudDrive starts slowing down until it completely stops accepting writes and throws a red warning message saying that the local cache drive is full.

This is the CloudPart folder after a session of having uploaded approx. 30 GB of data, and this is the local cache disk at the same time as the screenshot above. Remember, there is absolutely nothing on this drive other than the CloudPart folder. As is obvious, a discrepancy exists between the amount of data reported as "Used space" on the SSD and the "Size on disk" of the CloudPart folder. Selecting "Performance -> Clear local cache" does nothing. Detaching and re-attaching the drive clears and empties the local drive, reducing the "Used space" to almost nothing, and I can again start filling the cloud drive with data until the cache drive runs full again.

I'm still experiencing the issue, which prevents me from uploading more than the size of my cache drive (480 GB) before I have to reboot the computer. Hosting nothing but the CloudDrive cache, fsutil gives:

C:\Windows\system32>fsutil fsinfo ntfsinfo s:

Does this tell you anything? Since a reboot solves the issue, could it be that CloudDrive needs to release the write handles on the (sparse) cache files so that the drive manager will let go of the reservation?

I understand that the issue is caused by how NTFS handles sparse files. After some testing, I can confirm the following. TL;DR: the problem is solved by restarting CloudDriveService, which flushes something to disk. Restarting CloudDriveService … does actually resolve the issue and releases the reserved space on the drive (which in effect is the same as what a reboot accomplishes), allowing me to continue adding data. The reserved space is released when the CloudDrive service is restarted, and the issue does not re-appear until after I have added data again. It is clear to me that the issue is resolved when CloudDrive releases/flushes these sparse files, which indicates that the deallocated blocks are only freed when the file handles are closed by CloudDrive. It makes sense to me that Windows, to improve performance, might not recompute the space freed inside sparse files until after the file handles are released; after all, free disk space is a metric that mostly needs to be accurate in the direction of preventing overfill, not the other way around.
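To check whether the free-space accounting really lags behind an open handle, one could run a small experiment directly against the cache volume. The sketch below is my own illustration, not anything from CloudDrive: it creates a sparse file on the S: volume (a placeholder for the dedicated cache SSD, as is the file name), deallocates its data with FSCTL_SET_ZERO_DATA while the handle is still open, and compares the volume's reported free space before and after the handle is closed.

```python
# Minimal sketch of the experiment described above, assuming Windows/NTFS and
# Python with ctypes. The S: drive letter and file name are placeholders for
# the dedicated cache SSD; this is my own illustration, not CloudDrive code.
import ctypes
import msvcrt
import os
import shutil

FSCTL_SET_SPARSE    = 0x000900C4
FSCTL_SET_ZERO_DATA = 0x000980C8
kernel32 = ctypes.windll.kernel32

class FILE_ZERO_DATA_INFORMATION(ctypes.Structure):
    _fields_ = [("FileOffset", ctypes.c_longlong),
                ("BeyondFinalZero", ctypes.c_longlong)]

def ioctl(py_file, code, in_buf=None):
    """Issue DeviceIoControl against the file's underlying Win32 handle."""
    handle = ctypes.c_void_p(msvcrt.get_osfhandle(py_file.fileno()))
    returned = ctypes.c_ulong(0)
    ok = kernel32.DeviceIoControl(
        handle, code,
        ctypes.byref(in_buf) if in_buf is not None else None,
        ctypes.sizeof(in_buf) if in_buf is not None else 0,
        None, 0, ctypes.byref(returned), None)
    if not ok:
        raise ctypes.WinError()

drive = "S:\\"                                    # placeholder: the cache SSD
path = os.path.join(drive, "sparse_test.bin")
chunk = os.urandom(1024 * 1024)                   # 1 MiB of non-zero data

with open(path, "wb") as f:
    ioctl(f, FSCTL_SET_SPARSE)                    # mark the file sparse
    for _ in range(256):                          # write ~256 MiB
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())

    # Deallocate the whole range while the handle is still open, which is
    # roughly what I imagine CloudDrive does when it trims its cache files.
    ioctl(f, FSCTL_SET_ZERO_DATA,
          FILE_ZERO_DATA_INFORMATION(0, 256 * 1024 * 1024))
    free_while_open = shutil.disk_usage(drive).free

free_after_close = shutil.disk_usage(drive).free  # handle has been released

print(f"free with handle open  : {free_while_open  / 2**20:10.1f} MiB")
print(f"free after handle close: {free_after_close / 2**20:10.1f} MiB")
os.remove(path)
```

If the second number comes out noticeably larger than the first, that would match the behaviour I am seeing with the cache files; if the two are identical, the lag presumably only shows up under CloudDrive's particular write pattern.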
Why is this issue so "rare"? Is it only a small subset of users that experience this bug? Does not everybody see this happen when they copy more data to the cloud drive than the size of the cache? For me, this makes CloudDrive incredibly difficult to use: I'm trying to upload about 8 TB of data, and given a 480 GB SSD as a cache drive, you can do the math on how many reboots I have to do, how many times I have to restart the copy, and how often I need to check the entire CloudDrive for data consistency (as CloudDrive crashes when the cache drive is full).

While the problem might be NTFS-related, I would claim that it would be relatively simple to mitigate by having CloudDrive release these file handles from time to time so that the file system can catch up on how much space is actually occupied by the sparse files. To me, it seems like a piece of cake to just make CloudDrive release and re-open the cache file handles on a regular basis, for example after every 25% of the cache size written, and/or when the cache drive nears full or writes are beginning to get throttled; the sketch below shows the idea.
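Purely as an illustration of what I mean (CloudDrive's internals are not public, so the class, file name and thresholds below are invented), the mitigation would amount to a writer that closes and re-opens its cache file handle after a configurable amount of data, or when the cache volume runs low on free space:

```python
# Hypothetical sketch of the suggested mitigation; none of these names exist in
# CloudDrive, they only demonstrate the idea of periodically cycling a handle.
import os
import shutil

class RecyclingCacheWriter:
    """Writes to a cache file, but closes and re-opens the handle after a
    configurable amount of data, or when the cache volume runs low on space,
    so NTFS gets a chance to settle its free-space accounting."""

    def __init__(self, path, cache_size_bytes, recycle_fraction=0.25,
                 low_space_bytes=1 * 2**30):
        self.path = path
        self.recycle_after = int(cache_size_bytes * recycle_fraction)
        self.low_space_bytes = low_space_bytes
        self.written_since_recycle = 0
        self._fh = open(path, "ab")          # append mode keeps the position

    def write(self, data: bytes):
        self._fh.write(data)
        self.written_since_recycle += len(data)
        low_space = (shutil.disk_usage(os.path.dirname(self.path) or ".").free
                     < self.low_space_bytes)
        if low_space or self.written_since_recycle >= self.recycle_after:
            self.recycle()

    def recycle(self):
        """Flush and close the handle, then re-open it."""
        self._fh.flush()
        os.fsync(self._fh.fileno())
        self._fh.close()                     # handle released: NTFS can catch up
        self._fh = open(self.path, "ab")
        self.written_since_recycle = 0

    def close(self):
        self._fh.close()

# Example (hypothetical path): a 6 GB fixed cache would get its handle cycled
# roughly every 1.5 GB written.
# cache = RecyclingCacheWriter(r"S:\CloudPart\cache.dat", cache_size_bytes=6 * 2**30)
```

Whether something like this is feasible inside CloudDrive obviously depends on how its cache writer is structured, but the principle is simply: close the handle, let NTFS catch up, re-open it and carry on.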