CephFS supports snapshots, which are normally created with the mkdir command inside a special snapshot directory. Note that this is a hidden, special directory and is not visible in a normal directory listing.
Generally, snapshots do what the name implies: they preserve the state of the data at the point in time the snapshot is taken, even as the data continues to change. It is important to note that some aspects of CephFS snapshots may differ from what you expect:
By default, the snapshot feature is enabled on new file systems. To enable it on an existing file system, use the following command.
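A minimal example, assuming the file system is named cephfs (substitute your own file system name):

    ceph fs set cephfs allow_new_snaps true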
When snapshots are enabled, every directory in CephFS has a special hidden .snap directory. (If you prefer, you can configure a different name with the client snapdir setting.)
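As an illustration, the option can be set in the [client] section of ceph.conf; the alternative name .snapshots below is just an example:

    [client]
        client snapdir = .snapshots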
To create a CephFS snapshot, create a subdirectory with a name of your choice under .snap. For example, to create a snapshot of the directory /1/2/3/, run the command mkdir /1/2/3/.snap/my_snapshot_name.
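Putting the example above into concrete commands (the path and snapshot name are just the placeholders used in this article):

    mkdir /1/2/3/.snap/my_snapshot_name
    ls /1/2/3/.snap    # the new snapshot should show up here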
The client sends the request to the MDS, which handles it in Server::handle_client_mksnap(). The MDS allocates a snapid from the SnapServer, creates a new inode linked to a new SnapRealm, and submits the operation to the MDLog. After the commit, MDCache::do_realm_invalidate_and_update_notify() is triggered; this function broadcasts the new SnapRealm to all clients that hold caps on any file under the snapshotted directory. Upon receiving the notification, a client synchronously updates its local SnapRealm hierarchy and generates a new SnapContext for the new SnapRealm structure, which is then used when writing snapshot data to the OSDs. At the same time, the snapshot metadata (the sr_t structure) is written back to the OSDs as part of the directory's information. This write-back process is entirely asynchronous.
Deleting a snapshot triggers a similar process. If an inode is renamed out of its parent SnapRealm, the rename code creates a new SnapRealm for the renamed inode (if one does not already exist), saves the IDs of the snapshots that are effective on the original parent SnapRealm into the past_parent_snaps of the new SnapRealm, and then follows a process similar to snapshot creation.
A RADOS SnapContext consists of a snapshot sequence ID (snapid) and all of the snapshot IDs that an object belongs to. To generate this list, the snapids associated with the SnapRealm are combined with all valid snapids in past_parent_snaps; stale snapids are filtered out using the effective snapshots cached by the SnapClient.
File data is stored in RADOS "self-managed" snapshots. When writing file data to the OSDs, the client is careful to use the correct SnapContext.
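If you want to see which self-managed snapshots (clones) exist for a particular RADOS object, the rados tool can list them; the pool and object names here are only placeholders:

    rados -p cephfs_data listsnaps 10000000000.00000000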
Snapshot dentries (and their inodes) are stored in-line as part of the directory they belonged to at the time of the snapshot. All dentries include the first and last snapid for which they are valid; dentries that have not been snapshotted have their last snapid set to CEPH_NOSNAP.
There is quite a bit of code to handle writeback efficiently. When the client receives an MClientSnap message, it updates its local SnapRealm representation and its links to specific inodes, and generates a CapSnap for each affected inode. The CapSnap is flushed out as part of capability writeback; if there is dirty data, the CapSnap is used to block new writes until the snapshot has been completely flushed to the OSDs. On the MDS side, dentries representing snapshots are generated as part of the regular dentry flushing process. Dentries with outstanding CapSnap data are kept pinned and recorded in the journal.
Snapshots are deleted by calling rmdir on their entry under the .snap directory they are rooted in. (An attempt to delete a directory that roots snapshots will fail; you must delete the snapshots first.) Once deleted, snapshots are added to the OSDMap's list of deleted snapshots, and the file data is removed by the OSDs. The metadata is cleaned up as the directory objects are read in and written back out again.
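Continuing the earlier example, the snapshot created above could be removed like this:

    rmdir /1/2/3/.snap/my_snapshot_name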
An inode with multiple hard links is moved into a dummy global SnapRealm. This dummy SnapRealm covers all snapshots in the file system. The inode's data is preserved for any new snapshot, and this preserved data covers snapshots taken on any link of the inode.
It should be noted that the interaction between CephFS snapshots and multiple file systems is problematic: each MDS cluster allocates snapids independently, so if multiple file systems share a pool, their snapshots will collide. If a client then deletes one of these snapshots, it can destroy data belonging to another file system, and no error is raised. This is one of the reasons why CephFS snapshots are not recommended.
Create a snapshot:
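For instance, assuming the file system is mounted at /mnt/cephfs (adjust the mount point and snapshot name to your environment):

    mkdir /mnt/cephfs/.snap/snap1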
To restore a file from a snapshot:
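Snapshots are read-only, so restoring a file simply means copying it back out of the .snap tree; the paths below are illustrative:

    cp /mnt/cephfs/.snap/snap1/myfile /mnt/cephfs/myfile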
Automatic snapshot
Use the cephfs-snap script to automatically create snapshots and delete old ones.
Download the cephfs-snap script to /usr/bin (see the install example below).
Use it together with /etc/cron.{hourly,daily,weekly,monthly}.
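For example, once the script has been downloaded to the current directory (the download location is omitted here), it can be installed like this:

    install -m 0755 cephfs-snap /usr/bin/cephfs-snap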
Usage example:
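A hypothetical hourly job: create /etc/cron.hourly/cephfs-snap with content like the following (this assumes cephfs-snap takes the directory to snapshot and the number of snapshots to keep as arguments; check the script's own usage output for the exact syntax):

    #!/bin/bash
    # Snapshot the CephFS mount point and keep the 24 most recent snapshots.
    /usr/bin/cephfs-snap /mnt/cephfs 24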
The created cron file must be set to be executable.
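For the hourly example above:

    chmod +x /etc/cron.hourly/cephfs-snap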
To verify that the configured cron job runs correctly, manually run the /etc/cron.*/ script created in the steps above.
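For the hourly example above, that means running:

    /etc/cron.hourly/cephfs-snap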
Then check whether the CephFS snapshot has appeared in the .snap directory.
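For example, assuming the mount point used earlier:

    ls /mnt/cephfs/.snap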
If cron does not trigger snapshots as expected, verify that the files /usr/bin/cephfs-snap and /etc/cron.*/cephfs-snap are executable.