CREATING POOLS

This is how I'm going to implement pools:

* Every disk present in the system is controlled by a driver.
* Every driver has a request queue.

I will create a virtual device, the POOL. The pool's size = the sum of the sizes of all the devices that form the pool.
Any request that arrives on the pool's request queue will be forwarded to the corresponding physical device:
-------------------------
| DEV-A | DEV-B | DEV-C |  > dev_name
| 0-100 | 0-100 | 0-100 |  > size in sectors
-------------------------
           ||
           ||
           \/
-------------------------
|          POOL         |
|          300          |
-------------------------


So, only this pool is visible to the user.
Any I/O request to the POOL has to be converted into an I/O request for one of the devices.
Example: a request to read the pool's 125th sector becomes a request to read DEV-B's 25th sector.

Please see http://lwn.net/Articles/58720/

In this device driver,
static void sbd_transfer() does the copying between the buffer and the disk.

I need to modify this sbd_transfer() such that:
{
if (sector >= 0 && sector < 100)    the I/O request is sent to DEV-A's driver;

if (sector >= 100 && sector < 200)  the I/O request is sent to DEV-B's driver;

if (sector >= 200 && sector < 300)  the I/O request is sent to DEV-C's driver;
}
This is the algorithm.
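
To make the mapping concrete, here is a small self-contained sketch of that dispatch in plain C (userspace, just to check the arithmetic; the device names, the 100-sector size, and pool_forward() are illustrative placeholders, since the real hand-off to the other driver is the open question below):

#include <stdio.h>

/* Sketch of the modified transfer dispatch described above. pool_forward()
 * is a hypothetical stand-in for however the request actually reaches the
 * underlying driver; here it just prints what it would do. */
enum pool_dev { DEV_A, DEV_B, DEV_C };

static void pool_forward(enum pool_dev dev, unsigned long sector,
                         unsigned long nsect, int write)
{
    printf("%s %lu sector(s) at DEV-%c sector %lu\n",
           write ? "write" : "read", nsect, 'A' + dev, sector);
}

static void pool_transfer(unsigned long sector, unsigned long nsect, int write)
{
    if (sector < 100)
        pool_forward(DEV_A, sector, nsect, write);        /* pool 0..99    */
    else if (sector < 200)
        pool_forward(DEV_B, sector - 100, nsect, write);  /* pool 100..199 */
    else if (sector < 300)
        pool_forward(DEV_C, sector - 200, nsect, write);  /* pool 200..299 */
    else
        printf("pool: request past end of device\n");
}

int main(void)
{
    pool_transfer(125, 8, 0);   /* prints: read 8 sector(s) at DEV-B sector 25 */
    return 0;
}

Note that a single request can straddle a boundary (e.g. 8 sectors starting at pool sector 95 span DEV-A and DEV-B), so a real implementation would have to split such requests; the sketch ignores that.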

Questions:

* Is this possible?
* If it is, how can we enable communication between drivers?
[The pool's driver needs to pass a read/write request to DEV-B's driver. How?]
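
For what it's worth, the usual pattern in stacking/remapping block drivers (device-mapper works this way) is to retarget the bio at the underlying device and resubmit it to the block layer. A rough sketch, assuming a reasonably recent kernel; the helper names and fields below have changed across kernel versions, so treat this as an illustration rather than drop-in code:

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Illustration only: redirect an incoming bio to one of the underlying
 * devices and let the block layer deliver it to that device's driver.
 * 'target_bdev' would be DEV-B's block_device, 'pool_offset' the pool
 * sector at which DEV-B starts (100 in the example above). */
static void pool_remap_and_resubmit(struct bio *bio,
                                    struct block_device *target_bdev,
                                    sector_t pool_offset)
{
    bio_set_dev(bio, target_bdev);              /* point the bio at DEV-B  */
    bio->bi_iter.bi_sector -= pool_offset;      /* rebase into DEV-B       */
    submit_bio(bio);                            /* back to the block layer */
}

Depending on the kernel version, the re-injection call is generic_make_request(), submit_bio_noacct(), or submit_bio(), and the sbd example is request-based rather than bio-based, so the details will differ.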


1 comment:

  1. Unknown Says:

    Your approach is right; you would be adding one layer between the filesystem and the physical disk.
    In practice that means: given a physical disk /dev/sdc of size 50GB, you would create 50 sliced volumes of 1GB each (/dev/vol.., vol1, i.e. your DEV-A).
    Then out of these slices you create one disk that can be presented to the user/filesystem at mkfs time.
    Assume you create the volume /dev/volpool from 10 sliced volumes (vol0 to vol9).

    For each vol device you should have one gendisk and device structure, and for /dev/volpool as well. Yes, from the OS, I/O will come to you in the request queue, i.e. sbd_request().
    Your first call is volpool_request(), which forwards to the real disk or volume by calculating the mapping.
    There is a lot of scope for optimization, but first focus on sanity.


    Homework: (1) Create a RAM disk and a volpool disk, and do a one-to-one mapping, i.e. the RAM disk is 100MB and /dev/volpool is 100MB; capture all the calls from volpool and pass them on to the RAM disk driver.
    (2) Create a physical disk in VMware and a volpool disk, with a one-to-one mapping.
    (3) Create a 100MB RAM disk, create sliced volumes of 10MB each, then create the volpool disk out of the sliced volumes.

    In an actual implementation we will not create a gendisk for each /dev/vol (or DEV-A); we will have our own data structure for the sliced volumes, which will not be visible in the namespace.
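
    A hypothetical sketch of what such an internal slice descriptor could look like (field names are illustrative and the headers vary between kernel versions; only the pool itself gets a gendisk):

    #include <linux/blkdev.h>

    /* One slice of a physical disk; internal only, no gendisk, no name
     * in /dev. */
    struct pool_slice {
        struct block_device *bdev;   /* underlying physical disk          */
        sector_t start;              /* first sector of the slice on bdev */
        sector_t nr_sectors;         /* length of the slice               */
    };

    /* The pool itself is the only device visible to the user. */
    struct pool_volume {
        struct gendisk *disk;        /* /dev/volpool                      */
        unsigned int nr_slices;
        struct pool_slice *slices;   /* ordered array: pool sector N maps
                                      * to slices[N / slice_size]         */
    };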