Chapter 13: Problem 11
What are the techniques used to improve performance of disks in RAID?
Short Answer

RAID improves disk performance primarily through data striping: data is distributed across multiple disks (at the bit level or the block level) so that several disks can be read or written in parallel. Striping increases the transfer rate for large requests and lets many small, independent requests be served concurrently by different disks. Mirroring, although mainly a reliability technique, can also speed up reads, since a read can be directed to whichever copy's disk is less busy.
Step by step solution

Step 1: Understand data striping
The fundamental performance technique in RAID is data striping. Instead of storing a file on a single disk, its data is distributed across all the disks in the array, so that a single logical request can be serviced by several disks working in parallel.

Step 2: Bit-level striping
In bit-level striping, the bits of each byte are spread across the disks. Every disk participates in every access, so the effective transfer rate is multiplied by the number of disks, although all disks are tied up by each request.

Step 3: Block-level striping
In block-level striping, whole blocks are distributed round-robin across the disks. A large request reads many blocks from different disks in parallel, while several small, independent requests can be served simultaneously by different disks. A minimal sketch of the block-to-disk mapping follows.
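To make the mapping concrete, here is a minimal sketch (not from the textbook) of block-level striping; the names NUM_DISKS and stripe_location are illustrative, and the round-robin layout shown is the standard RAID 0 scheme.

```python
# A minimal sketch of block-level striping (RAID 0 layout):
# logical block i is placed on disk (i mod n) at offset (i div n),
# so n consecutive blocks can be transferred in parallel.

NUM_DISKS = 4  # hypothetical array size


def stripe_location(logical_block: int, num_disks: int = NUM_DISKS) -> tuple[int, int]:
    """Map a logical block number to (disk index, block offset on that disk)."""
    return logical_block % num_disks, logical_block // num_disks


if __name__ == "__main__":
    # Blocks 0..7 spread round-robin across 4 disks: a request for
    # blocks 0-3 can be serviced by all four disks at once.
    for block in range(8):
        disk, offset = stripe_location(block)
        print(f"logical block {block} -> disk {disk}, offset {offset}")
```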
Key Concepts

These are the key concepts you need to understand to accurately answer the question: RAID (Redundant Arrays of Independent Disks), data striping (bit-level and block-level), and parallel disk access.
Related Problems

Can you think of techniques other than chaining to handle bucket overflow in external hashing?
Discuss the advantages and disadvantages of using (a) an unordered file, (b) an ordered file, and (c) a static hash file with buckets and chaining. Which operations can be performed efficiently on each of these organizations, and which operations are expensive?
Why are disks, not tapes, used to store online database files?
Suppose that a file initially contains \(r = 120{,}000\) records of \(R = 200\) bytes each in an unsorted (heap) file. The block size is \(B = 2400\) bytes, the average seek time is \(s = 16\,\mathrm{ms}\), the average rotational latency is \(rd = 8.3\,\mathrm{ms}\), and the block transfer time is \(btt = 0.8\,\mathrm{ms}\). Assume that 1 record is deleted for every 2 records added until the total number of active records is 240,000.
a. How many block transfers are needed to reorganize the file?
b. How long does it take to find a record right before reorganization?
c. How long does it take to find a record right after reorganization?
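The arithmetic here is mechanical enough that a worked sketch helps. The following is a minimal calculation under the usual textbook cost model, which the problem does not spell out: deleted records are only marked (so they occupy space until reorganization), reorganization reads every old block and writes back only the active records, and a linear search scans half the file's blocks on average at one block-transfer time each.

```python
# Hedged worked sketch of the heap-file reorganization problem above.
R, B = 200, 2400              # record size, block size (bytes)
s, rd, btt = 16.0, 8.3, 0.8   # seek, rotational delay, block transfer (ms)

bfr = B // R                  # 12 records per block
# Net gain of 1 active record per (2 added, 1 deleted), so reaching
# 240,000 active records means 240,000 added and 120,000 deleted.
added = 240_000
slots = 120_000 + added       # deleted records still occupy space: 360,000
blocks_before = slots // bfr          # 30,000 blocks before reorganization
blocks_after = 240_000 // bfr         # 20,000 blocks after

# a. reorganize: read every old block, write back only active records
transfers = blocks_before + blocks_after   # 50,000 block transfers

# b./c. average linear search touches half the blocks, one btt each
search_before = (blocks_before / 2) * btt  # 12,000 ms
search_after = (blocks_after / 2) * btt    #  8,000 ms
print(transfers, search_before, search_after)
```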
Suppose we have a sequential (ordered) file of 100,000 records, where each record is 240 bytes. Assume that \(B = 2400\) bytes, \(s = 16\,\mathrm{ms}\), \(rd = 8.3\,\mathrm{ms}\), and \(btt = 0.8\,\mathrm{ms}\). Suppose we want to make \(X\) independent random record reads from the file. We could make \(X\) random block reads, or we could perform one exhaustive read of the entire file looking for those \(X\) records. The question is to decide when it is more efficient to perform one exhaustive read of the entire file than to perform \(X\) individual random reads. That is, what is the value of \(X\) for which an exhaustive read of the file is more efficient than \(X\) random reads? Develop this as a function of \(X\).
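A hedged sketch of the break-even point for this problem, assuming the usual cost model: one random record read costs \(s + rd + btt\), while an exhaustive scan pays one seek plus rotational delay and then transfers every block sequentially.

```python
# Break-even X for random reads vs. one exhaustive sequential scan.
s, rd, btt = 16.0, 8.3, 0.8          # ms
records, record_size, B = 100_000, 240, 2400

bfr = B // record_size               # 10 records per block
b = records // bfr                   # 10,000 blocks

random_read = s + rd + btt           # 25.1 ms per record fetched
exhaustive = s + rd + b * btt        # 8,024.3 ms for the whole file

# The exhaustive read wins once X * random_read > exhaustive.
X = exhaustive / random_read
print(f"break-even at X ≈ {X:.1f}; exhaustive read wins for X >= {int(X) + 1}")
```

With these numbers the threshold comes out to roughly \(X \approx 320\) records.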