iSCSI / iSER
------------
 - According to benchmarks on the net, iSCSI delivers performance on par
 with NFS. A few of my tests reported about 350 MB/s on storage capable
 of 3.5 GB/s, though I have not done any tuning. So, we are only interested
 in iSER solutions for high-performance systems.
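 A quick way to reproduce such a sequential-read throughput check is dd with
 direct I/O (a sketch; the device path is a placeholder for the iSCSI-attached
 disk):
     dd if=/dev/sdX of=/dev/null bs=8M count=1024 iflag=direct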
 
 - Terminology
    Target: the server side of the protocol
    Initiator: the iSCSI client
 
 - According to the standard, multiple initiators may share a single target.
 This is supported by most of the stacks. However, synchronization is left
 to the user, so it is only usable with some higher-level clustering
 filesystem taking care of locking (for example, GFS2).

 Server stacks:
  ietd - no iSER support (as of 28.11.2012)
  scst - a re-implementation of ietd (still no iSER according to the documentation)
  stgt - iSER support, but known to be not that fast
  lio/tcm - iSER support is expected in kernel 3.9
 
 Client stacks:
  openiscsi     - iSER is supported
 
  stgt
  ====
    - OpenSuSE does not include support for iSER. The package should be recompiled
    with ISCSI_RDMA=1 added to the make command (see the build sketch after this
    list).
    - IPv6 can be a problem (unless it is configured and used, I guess). tgtd will
    complain about a missing/misbehaving iser driver. Removing IPv6 support fixes
    the issue (YaST / Network Devices / General / Enable IPV6).
    - mdadm may detect the software RAID on the partitions to be exported and
    initialize a partial md array. tgt will then see that the partition is already
    in use and will refuse to share it. Restricting the devices used for software
    RAID in /etc/mdadm.conf resolves the issue:
	DEVICE /dev/sd[ab]*
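
    Build sketch for enabling iSER in stgt (assuming a build from the stgt
    source tree; install paths may differ per distribution):
        make ISCSI_RDMA=1
        make install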
    
    List current configuration:
        tgtadm --lld iscsi --op show --mode target
    When properly configured, you should see something like:
        Target 1: iqn.ipepdvcompute2.ssd3500
            Driver: iser
        LUN: 0 (this is the virtual adapter LUN)
        LUN: 1 (this is the first published disk LUN)

    Configuring (create interface, allow access, and create first disk):
        tgtadm --lld iser --mode target --op new --tid 1 --targetname "iqn.$(hostname).ssd3500"
        tgtadm --lld iser --mode target --op bind --tid 1 --initiator-address ALL
        tgtadm --lld iser --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/disk/by-uuid/6eeffa6d-d61e-4157-8732-e1da39368325 --bstype aio

    Store the current setup:
         tgt-admin --dump > /etc/tgt/targets.conf
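
    After a restart, the dumped configuration can be re-applied from the default
    /etc/tgt/targets.conf (a standard tgt-admin invocation):
         tgt-admin --execute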

  tcm/lio
  =======
    - The most recent and actively developed solution, selected for inclusion
    into the kernel. RDMA support is scheduled for integration in kernel 3.9.
    - I checked their git tree; there is no iSER as of 11.2012.

  openiscsi
  =========
    iscsi_discovery 192.168.11.5 -t iser -f -l
	-f 	- disable fallback to tcp
	-l 	- immediately log in and get all the devices [better to check first]
    
    Logout from all nodes:
	    iscsiadm -m node -U all
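
    Verify that an established session actually uses the iser transport
    (standard open-iscsi session inspection):
	    iscsiadm -m session -P 3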

  Performance
  ===========
    - There are multiple options affecting the performance of the system.
    - LUN types: direct-store (raw devices) or backing-store (files, etc.)
    - Backend (bs-type): rdwr (cached file access), aio (kernel AIO), sg (direct access)
    - Write cache (write-cache): disabling it is a good idea for high-speed streaming
    - Block size (block-size): 512 - 4096 (the target supports bigger sizes, but the
    initiator does not). With a non-standard size (i.e. other than 512), O_DIRECT
    access on the client fails with errno 22 (EINVAL).
    - Packet sizes on the target (MaxRecvDataSegmentLength, MaxXmitDataSegmentLength,
    FirstBurstLength, MaxBurstLength) and on the client (node.conn[0].iscsi.MaxRecvDataSegmentLength,
    node.session.iscsi.FirstBurstLength, node.session.iscsi.MaxBurstLength).
    If the segment lengths are set below the RAID block size (stripe size * number
    of disks), there are problems connecting to the target in aio mode. A matching
    client-side configuration sketch is given after this list.
    - For high-performance streaming we need: direct-store, the aio backend (sg is
    not working for me, but aio gives really good speed), and big buffers.
    - On the target side, the read-ahead should be set to the stripe size (in
    512-byte blocks). It also affects the writing speed.
        blockdev --setra 65536 /dev/sdc
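
    A matching client-side sketch for /etc/iscsi/iscsid.conf (assuming the same
    buffer sizes as in the target configuration below; defaults differ per
    distribution):
        node.conn[0].iscsi.MaxRecvDataSegmentLength = 2097152
        node.session.iscsi.FirstBurstLength = 8388608
        node.session.iscsi.MaxBurstLength = 8388608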

/etc/tgt/targets.conf:
    default-driver iser

    <target iqn.ipepdvcompute2.ssd3500>
    <direct-store /dev/disk/by-uuid/6eeffa6d-d61e-4157-8732-e1da39368325>
        bs-type aio
        MaxRecvDataSegmentLength 2097152
        MaxXmitDataSegmentLength 2097152
        FirstBurstLength 8388608
        MaxBurstLength 8388608
        block-size 512
        write-cache off
    </direct-store>
    </target>