Question
can you answer 10 and 11 please
- Explain the difference between basic linked allocation of file blocks and the file allocation table (FAT) block allocation scheme.
- The two most common methods for managing free space in a file system are bitmaps and linked free space lists. Give one benefit of a bitmap over a linked free space list, and one benefit of a linked free space list over a bitmap.
- Explain why methods for ensuring reliability in file systems center on operations that write a single sector.
- A Linux variant called TxOS, developed at the University of Texas, supports transactions with shadowing by decomposing inodes into two parts: a header that contains infrequently modified data about each file, and a data component holding fields that are commonly modified by system calls. The header contains a pointer to the related data component, and the data component contains a pointer to the header. Explain how this inode organization makes it relatively easy to implement shadowing for changes to a file's metadata. (See the struct sketch after this list.)
- Distributed file systems often allow nodes to cache local copies of shared data, using either write-through or write-back policies to handle changes to those data. List one benefit of using write-through file caching, and one benefit of using write-back file caching.
- A client process uses a binary search tree to store a set of 32-bit integers, one integer per node. The 32-bit address of the tree's root node is passed as the only argument to a remote procedure call to be executed in a server process. If the binary search tree contains 15 values, what is the minimum amount of space required to marshal the parameter or parameters of this function such that the remote procedure can access the entire binary search tree stored by the client? Show your work for full credit.
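For reference, the TxOS inode organization described in the question can be pictured as two C structs that point at each other. This is only a minimal sketch: every type and field name below is an illustrative assumption, not an identifier from the actual TxOS source.

```c
/* Illustrative sketch only (hypothetical names, not TxOS identifiers):
 * a rarely-modified header and a frequently-modified data component,
 * each holding a pointer to the other, as described in the question. */
#include <stdio.h>

struct inode_data;                      /* forward declaration */

struct inode_header {
    unsigned long      inode_number;    /* infrequently modified fields */
    unsigned short     mode;
    struct inode_data *data;            /* -> commonly modified component */
};

struct inode_data {
    unsigned long        size;          /* fields changed by system calls */
    long                 mtime;
    long                 atime;
    unsigned int         link_count;
    struct inode_header *header;        /* back-pointer to the header */
};

int main(void)
{
    struct inode_header hdr;
    struct inode_data   dat;

    hdr.data   = &dat;                  /* header finds its data component */
    dat.header = &hdr;                  /* data component finds its header */

    printf("linked both ways: %d\n", hdr.data->header == &hdr);
    return 0;
}
```

The point of the split is visible in the types: the fields that system calls change all the time live in one structure, the rarely modified fields in another, and either half can be reached from the other through the embedded pointer.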
Explanation / Answer
Q-10
A single logical file operation can involve updates to multiple physical disk blocks (for example, a data block, the inode, and the free-space bitmap). If a crash occurs between those updates, the on-disk structures are left inconsistent. What the disk does guarantee is that a write of a single sector completes atomically, so reliability techniques are built around that one atomic primitive and carefully order their multi-block updates on top of it. There is also a performance tension: for write efficiency the file system wants contiguous sequences of free blocks spread across all block groups, yet updates leave dead blocks scattered. The various write patterns used by these schemes aim at counterbalancing these two effects.
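A minimal sketch of that point, assuming a hypothetical append operation with invented block names; the crash position is chosen purely for illustration.

```c
/* Minimal sketch (all names hypothetical): one logical append touches
 * three separate disk blocks. A crash between any two of these writes
 * leaves the on-disk state inconsistent, which is why reliability
 * techniques are built on writes that are atomic at the sector level. */
#include <stdio.h>

enum { BLOCK_DATA, BLOCK_INODE, BLOCK_BITMAP, NBLOCKS };

static int on_disk[NBLOCKS];          /* 1 = block reflects the append */

static void write_block(int which)    /* stand-in for one sector write */
{
    on_disk[which] = 1;
}

int main(void)
{
    /* Logical operation: append one block to a file. */
    write_block(BLOCK_DATA);          /* 1. write the new data block    */
    write_block(BLOCK_INODE);         /* 2. update size/block pointers  */
    /* --- a crash here leaves the bitmap still marking the new block as
     *     free, so it could later be handed out to another file ------- */
    write_block(BLOCK_BITMAP);        /* 3. mark the block as allocated */

    printf("consistent only if all three writes completed: %d %d %d\n",
           on_disk[BLOCK_DATA], on_disk[BLOCK_INODE], on_disk[BLOCK_BITMAP]);
    return 0;
}
```

Each call to write_block here stands in for a one-sector write that the hardware completes atomically; the problem is only that three of them are needed for one logical operation.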
Modern hard disks achieve a very high data density, and the higher the data density, the harder it becomes to recover traces of old, overwritten data. It is plausible that recovering overwritten data is no longer possible with today's technology; at least, nobody currently advertises such a service (which does not mean it cannot be done).
Note that when a disk detects a damaged sector (a checksum failure upon reading), the next write to that sector will be silently remapped to a spare sector. This means the damaged sector (which has at least one wrong bit, but not necessarily more than one) will remain untouched forever after that event, and no amount of rewriting can change that: the disk's own electronics will refuse to use that sector ever again. If you want to be sure data is erased, it is much better never to let it reach the disk in the first place: use full-disk encryption.
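The remapping behaviour can be pictured with a toy model. All of the names, sizes, and the remap-table layout below are assumptions for illustration; real drive firmware is far more involved.

```c
/* Toy model of drive-internal bad-sector remapping (hypothetical layout).
 * Once a logical sector is remapped, writes land on the spare sector and
 * the original physical sector, old bits included, is never touched again. */
#include <stdio.h>

#define NSECTORS 8
#define NSPARES  2

static int  remap[NSECTORS];          /* -1 = not remapped, else spare idx */
static char physical[NSECTORS][16];   /* original physical sectors         */
static char spares[NSPARES][16];      /* spare sector pool                 */
static int  next_spare = 0;

static void mark_damaged(int lsn)     /* called when a read fails its checksum */
{
    if (remap[lsn] == -1 && next_spare < NSPARES)
        remap[lsn] = next_spare++;    /* future writes go to the spare */
}

static void write_sector(int lsn, const char *data)
{
    if (remap[lsn] != -1)
        snprintf(spares[remap[lsn]], sizeof spares[remap[lsn]], "%s", data);
    else
        snprintf(physical[lsn], sizeof physical[lsn], "%s", data);
}

int main(void)
{
    for (int i = 0; i < NSECTORS; i++) remap[i] = -1;

    write_sector(3, "secret");        /* old data lands on sector 3       */
    mark_damaged(3);                  /* a later read fails its checksum  */
    write_sector(3, "zeroes");        /* the "overwrite" goes to a spare  */

    /* ...so the original bits are still sitting in the physical sector. */
    printf("physical sector 3 still holds: %s\n", physical[3]);
    return 0;
}
```

Running this prints that physical sector 3 still holds its original contents, which is exactly why rewriting after the remap does not erase anything.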