- According to benchmarks on the net, iSCSI delivers performance on par
with NFS. A few of my tests reported about 350 MB/s on storage capable
of 3.5 GB/s, though I have not done any tuning. So, we are only interested
in iSER solutions for high performance systems.
Target: the server part of the protocol
Initiator: the iSCSI client
- According to the standard, multiple initiators may share a single target.
This is supported by most of the stacks. However, synchronization issues
are left to the user. So, it is only usable with some higher-level
clustering filesystem taking care of locking (for example GFS2).
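As a sketch, sharing one target between two clients then amounts to an ordinary login from each host, with GFS2 on top handling the locking (target name and IP as used elsewhere in these notes; the GFS2/cluster setup itself is not covered here, and /dev/sdX stands for whichever device the login exposes):

```shell
# on each initiator host:
iscsiadm -m node -T iqn.ipepdvcompute2.ssd3500 -p 192.168.11.5 --login
# then mount the shared GFS2 filesystem (assumes it was created beforehand)
mount -t gfs2 /dev/sdX /mnt/shared
```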
ietd - no iSER support (as of 28.11.2012)
scst - re-implementation of ietd (still no iSER according to the documentation)
stgt - iSER support, but known to be not that fast
lio/tcm - iSER support is expected in kernel 3.9

open-iscsi - iSER is supported
- OpenSuSE does not include support for iSER. The package should be recompiled,
adding ISCSI_RDMA=1 to the make command
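A sketch of the rebuild, assuming a checked-out tgt source tree (paths and install steps may differ per distribution packaging):

```shell
# rebuild the tgt userspace target with iSER (RDMA) support enabled
cd tgt
make ISCSI_RDMA=1
make install
```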
- IPv6 can be a problem (unless configured and used, I guess). The tgtd will
complain about a missing/misbehaving iser driver. Removing IPv6 support fixes
the issue (YaST/Network Devices/General/Enable IPV6)
- mdadm may detect the softraid on the partitions to be exported and initialize
a partial md array. Then tgt will see that the partition is already in use and
will refuse to share it. Restricting the devices used for software RAID in
/etc/mdadm.conf resolves the issue:
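A minimal sketch of such a restriction (the device globs are examples; list only the disks that actually hold the RAID members):

```
# /etc/mdadm.conf: only scan the listed devices for RAID members
DEVICE /dev/sda* /dev/sdb*
```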
List the current configuration:
tgtadm --lld iscsi --op show --mode target
When properly configured, you should expect to see:
Target 1: iqn.ipepdvcompute2.ssd3500
LUN: 0 (this is the virtual adapter LUN)
LUN: 1 (this is the first published disk LUN)
Configuring (create the interface, allow access, and create the first disk):
tgtadm --lld iser --mode target --op new --tid 1 --targetname "iqn.$(hostname).ssd3500"
tgtadm --lld iser --mode target --op bind --tid 1 --initiator-address ALL
tgtadm --lld iser --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/disk/by-uuid/6eeffa6d-d61e-4157-8732-e1da39368325 --bstype aio
tgt-admin --dump > /etc/tgt/targets.conf
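A sketch of the save/restore cycle; tgt-admin can re-apply a saved targets.conf without restarting the daemon (whether ALL or a specific target name is used is up to the setup):

```shell
# save the live configuration
tgt-admin --dump > /etc/tgt/targets.conf
# later, re-apply everything from targets.conf
tgt-admin --update ALL
```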
- Most recent and actively developed solution. Selected for inclusion into the kernel.
RDMA support is scheduled for integration in kernel 3.9.
- I checked their git tree; there is no iSER as of 11.2012
iscsi_discovery 192.168.11.5 -t iser -f -l
-f - disable fallback to tcp
-l - immediately login and get all the devices [better check first]

iscsiadm -m node -U all
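A more cautious sequence, sketched with the target name and IP used earlier in these notes: discover without auto-login first, log in to one target explicitly, and check the session before using the devices:

```shell
# discover available targets over iSER, without logging in yet
iscsi_discovery 192.168.11.5 -t iser -f
# log in to one specific target
iscsiadm -m node -T iqn.ipepdvcompute2.ssd3500 -p 192.168.11.5 --login
# verify the session and the attached block devices
iscsiadm -m session -P 3
# later, log out of all targets again
iscsiadm -m node -U all
```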
- There are multiple options affecting the performance of the system.
- LUN types: direct-store (raw devices) or backing-store (files, etc.)
- Backend (bs-type): rdwr (cached file access), aio (kernel AIO), sg (direct access)
- Write cache (write-cache): disabling is a good idea for high-speed streaming
- Block size (block-size): 512 - 4096 (the target supports bigger sizes, but the initiator does not).
With a non-standard size (not 512), O_DIRECT access on the client fails.
- Packet sizes (MaxRecvDataSegmentLength, MaxXmitDataSegmentLength, FirstBurstLength,
MaxBurstLength) and on the client (node.conn[0].iscsi.MaxRecvDataSegmentLength,
node.session.iscsi.FirstBurstLength, node.session.iscsi.MaxBurstLength).
If the segment lengths are set below the raid block size (strip * n_disks), there are
problems connecting to the target in aio mode.
- For high performance streaming we need: direct-store, aio backend (sg is not working
for me, but aio gives really good speed), and big buffers
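As a sketch, the client-side counterparts go into /etc/iscsi/iscsid.conf; the values below simply mirror the target settings used at the end of these notes, whether they are optimal is setup-dependent:

```
node.conn[0].iscsi.MaxRecvDataSegmentLength = 2097152
node.session.iscsi.FirstBurstLength = 8388608
node.session.iscsi.MaxBurstLength = 8388608
```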
- On the target side, the read-ahead should be set to the strip size (in 512-byte blocks). It
also affects the writing speed.
blockdev --setra 65536 /dev/sdc
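The read-ahead value is given in 512-byte sectors; a quick sketch of the arithmetic (the chunk size and disk count here are made-up example numbers, not the ones from this setup):

```shell
# example: 512 KiB chunk size, 8 data disks
chunk_kib=512
ndisks=8
stripe_kib=$((chunk_kib * ndisks))       # full stripe in KiB
ra_sectors=$((stripe_kib * 1024 / 512))  # convert KiB to 512-byte sectors
echo "$ra_sectors"
# apply it (device name is an example):
# blockdev --setra "$ra_sectors" /dev/sdc
```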
/etc/tgt/targets.conf:

<target iqn.ipepdvcompute2.ssd3500>
    <direct-store /dev/disk/by-uuid/6eeffa6d-d61e-4157-8732-e1da39368325>
    MaxRecvDataSegmentLength 2097152
    MaxXmitDataSegmentLength 2097152
    FirstBurstLength 8388608
    MaxBurstLength 8388608