iSCSI Target and Initiator Configuration on RHEL 7

Configure iSCSI target via targetcli on RHEL 7.

Software

Software used in this article:

  1. Red Hat Enterprise Linux 7.0
  2. targetcli 2.1.fb34
  3. iscsi-initiator-utils 6.2.0

Before We Begin

We have 3 VMs available, named ipa, srv1 and srv2. The ipa server, which we set up before, will be configured as an iSCSI target, and srv1 and srv2 will be iSCSI clients.

  1. The iSCSI target provides remote block storage and is called the server,
  2. the iSCSI initiator uses that storage and is called the client.

iSCSI Target Installation

On the IPA server, which is going to act as an iSCSI target, create a volume group with a 100MB logical volume to use for iSCSI:

# vgcreate vg_san /dev/sdb
# lvcreate --name lv_block1 --size 100M vg_san

Install targetcli package and enable the target service to start on boot:

# yum install -y targetcli
# systemctl enable target

Configure firewalld to allow incoming iSCSI traffic on TCP port 3260:

# firewall-cmd --add-port=3260/tcp --permanent
# firewall-cmd --reload
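Alternatively, if your firewalld version ships the predefined iscsi-target service (this is an assumption about your firewalld package; verify with firewall-cmd --get-services first), the service name can be used instead of the raw port:

```shell
# Equivalent to opening 3260/tcp, assuming the iscsi-target
# service definition exists on this firewalld version.
firewall-cmd --add-service=iscsi-target --permanent
firewall-cmd --reload
```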

Configure iSCSI Target

Run targetcli to configure iSCSI target:

# targetcli

Our plan for configuring the target is as follows:

  1. backstores -> block,
  2. backstores -> fileio,
  3. iscsi (IQN name),
  4. iscsi -> tpg1 -> portals,
  5. iscsi -> tpg1 -> luns,
  6. iscsi -> tpg1 -> acls.
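For reference, targetcli can also be driven non-interactively, one invocation per step, which is handy for scripting. This is a sketch of the plan above using the same names and paths as the interactive session in this article; it is not a tested transcript, so adjust to your environment:

```shell
# Same configuration as the interactive targetcli session,
# one targetcli invocation per step, in the same order.
targetcli /backstores/block create block1 /dev/vg_san/lv_block1 write_back=false
targetcli /backstores/fileio create file1 /root/file1.img size=100M sparse=true write_back=false
targetcli /iscsi create iqn.2003-01.local.rhce.ipa:target
targetcli /iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/portals create 0.0.0.0 ip_port=3260
targetcli /iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/luns create /backstores/fileio/file1
targetcli /iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls create iqn.1994-05.com.redhat:srv1 add_mapped_luns=false
targetcli /iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls create iqn.1994-05.com.redhat:srv2
targetcli /iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/luns create /backstores/block/block1
targetcli saveconfig
```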

Create a couple of backstores, block and fileio, with a local file system cache disabled to reduce the risk of data loss:

/> backstores/block create block1 /dev/vg_san/lv_block1 write_back=false
/> backstores/fileio create file1 /root/file1.img size=100M sparse=true write_back=false

Create an IQN (iSCSI Qualified Name):

/> iscsi/ create iqn.2003-01.local.rhce.ipa:target
Created target iqn.2003-01.local.rhce.ipa:target.
Created TPG 1.
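IQN names follow the pattern iqn.&lt;year&gt;-&lt;month&gt;.&lt;reversed-domain&gt;[:&lt;identifier&gt;] from RFC 3720; here the domain ipa.rhce.local is reversed to local.rhce.ipa. As an illustration, a hypothetical helper (not part of any iSCSI tooling) that checks a string against a simplified form of this pattern:

```shell
# Hypothetical helper: check a string against the basic IQN pattern
# iqn.<yyyy>-<mm>.<reversed-domain>[:<identifier>] (RFC 3720, simplified).
is_iqn() {
  printf '%s\n' "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[A-Za-z0-9._-]+)?$'
}

is_iqn 'iqn.2003-01.local.rhce.ipa:target' && echo 'valid IQN'
```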

On RHEL 7.0 we need to create a portal manually; on RHEL 7.2 the portal configuration is created automatically.

/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/portals create 0.0.0.0 ip_port=3260

Create a lun for the fileio backstore:

/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/luns create /backstores/fileio/file1

Create two ACLs for our iSCSI clients (srv1 and srv2), but don't add the previously mapped LUN to srv1, since that LUN should only be available to srv2:

/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls create iqn.1994-05.com.redhat:srv1 add_mapped_luns=false
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls create iqn.1994-05.com.redhat:srv2

Create a LUN for the block backstore; this LUN will be available to both servers:

/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/luns create /backstores/block/block1

Disable authentication (should be disabled by default anyway):

/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1 set attribute authentication=0

Optionally, set a userid and a password. Navigate to a specific ACL of our target:

/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls/iqn.1994-05.com.redhat:srv1/ set auth userid=client password=client

Save the configuration and exit.

/> saveconfig

List the configuration:

/> ls
o- / ....................................................................................... [...]
  o- backstores ............................................................................ [...]
  | o- block ................................................................ [Storage Objects: 1]
  | | o- block1 .......................... [/dev/vg_san/lv_block1 (100.0MiB) write-thru activated]
  | o- fileio ............................................................... [Storage Objects: 1]
  | | o- file1 ................................. [/root/file1.img (100.0MiB) write-thru activated]
  | o- pscsi ................................................................ [Storage Objects: 0]
  | o- ramdisk .............................................................. [Storage Objects: 0]
  o- iscsi .......................................................................... [Targets: 1]
  | o- iqn.2003-01.local.rhce.ipa:target ............................................... [TPGs: 1]
  |   o- tpg1 ............................................................. [no-gen-acls, no-auth]
  |     o- acls ........................................................................ [ACLs: 2]
  |     | o- iqn.1994-05.com.redhat:srv1 ........................................ [Mapped LUNs: 1]
  |     | | o- mapped_lun0 .............................................. [lun1 block/block1 (rw)]
  |     | o- iqn.1994-05.com.redhat:srv2 ........................................ [Mapped LUNs: 2]
  |     |   o- mapped_lun0 .............................................. [lun0 fileio/file1 (rw)]
  |     |   o- mapped_lun1 .............................................. [lun1 block/block1 (rw)]
  |     o- luns ........................................................................ [LUNs: 2]
  |     | o- lun0 ............................................... [fileio/file1 (/root/file1.img)]
  |     | o- lun1 ......................................... [block/block1 (/dev/vg_san/lv_block1)]
  |     o- portals .................................................................. [Portals: 1]
  |       o- 0.0.0.0:3260 ................................................................... [OK]
  o- loopback ....................................................................... [Targets: 0]

Restart the target and check its status:

# systemctl restart target
# systemctl status target

Configure iSCSI Client (Initiator)

Configuration of an iSCSI initiator requires installation of the iscsi-initiator-utils package, which includes the iscsi and the iscsid services and the /etc/iscsi/iscsid.conf and /etc/iscsi/initiatorname.iscsi configuration files.

On the iSCSI clients srv1 and srv2, install the package:

# yum install -y iscsi-initiator-utils

Note well that on the iSCSI initiator both services are needed. The iscsid service is the main service that accesses all configuration files involved. The iscsi service is the service that establishes the iSCSI connections.

# systemctl enable iscsi iscsid

Our plan for configuring the client is as follows:

  1. Configure iSCSI initiatorname,
  2. Discover targets,
  3. Log into targets.

Open the file /etc/iscsi/initiatorname.iscsi for editing, and set the initiator's name to iqn.1994-05.com.redhat:srv1 (on srv2, use iqn.1994-05.com.redhat:srv2 to match the ACLs created on the target).
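The file consists of a single InitiatorName line; on srv1 it would look like this:

```
InitiatorName=iqn.1994-05.com.redhat:srv1
```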

If username and password were configured, put them into /etc/iscsi/iscsid.conf:

node.session.auth.authmethod = CHAP
node.session.auth.username = client
node.session.auth.password = client

Be advised that CHAP authentication does not use strong encryption when passing credentials. If security of iSCSI data is a requirement, controlling the network side of the protocol is a better way to assure it. For example, using isolated VLANs to pass iSCSI traffic is a better implementation from a security point of view.

Discover targets (the ipa server is on 10.8.8.70):

# iscsiadm -m discovery -t sendtargets -p 10.8.8.70:3260
10.8.8.70:3260,1 iqn.2003-01.local.rhce.ipa:target
# iscsiadm -m discovery -P1
SENDTARGETS:
DiscoveryAddress: 10.8.8.70,3260
Target: iqn.2003-01.local.rhce.ipa:target
        Portal: 10.8.8.70:3260,1
                Iface Name: default
iSNS:
No targets found.
STATIC:
No targets found.
FIRMWARE:
No targets found.
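Each line of the sendtargets output has the form &lt;ip&gt;:&lt;port&gt;,&lt;tpgt&gt; &lt;iqn&gt;, where tpgt is the target portal group tag. A minimal sketch splitting such a line with POSIX parameter expansion, using the sample values from the discovery above:

```shell
# Split a sendtargets line of the form "<ip>:<port>,<tpgt> <iqn>".
line='10.8.8.70:3260,1 iqn.2003-01.local.rhce.ipa:target'

portal=${line%%,*}   # strip from the first comma on -> 10.8.8.70:3260
rest=${line#*,}      # strip up to the first comma
tpgt=${rest%% *}     # strip from the first space   -> 1
iqn=${line#* }       # strip up to the first space  -> the target IQN

echo "portal=$portal tpgt=$tpgt iqn=$iqn"
```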

Log into the discovered target:

# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.8.8.70:3260 --login

Check the session:

# iscsiadm -m session -P3 | less

An iSCSI disk should be available at this point. Note that the server srv2 will see both the block disk block1 and the fileio disk file1, since both are mapped to it. This is not the case for the server srv1, which only sees block1.

[srv1]# lsblk --scsi|grep LIO
sdb  3:0:0:0    disk LIO-ORG  block1   4.0  iscsi

Create a filesystem:

[srv1]# mkfs.ext4 -m0 /dev/sdb

Create a mount point and get UUID:

[srv1]# mkdir /mnt/block1
[srv1]# blkid | grep sdb
/dev/sdb: UUID="6a1c44d0-3e2f-49fc-85ba-ced3e44bb5b0" TYPE="ext4"

Add the following to /etc/fstab:

UUID=6a1c44d0-3e2f-49fc-85ba-ced3e44bb5b0 /mnt/block1 ext4 _netdev 0 0
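The _netdev option marks this as a network-dependent filesystem, so systemd delays mounting it until the network (and hence the iSCSI session) is up. As a sketch, the fstab line can be derived from the blkid output shown above (sample line and mount point copied from this article):

```shell
# Extract UUID and filesystem type from a blkid output line and
# print the matching /etc/fstab entry for a network block device.
blkid_line='/dev/sdb: UUID="6a1c44d0-3e2f-49fc-85ba-ced3e44bb5b0" TYPE="ext4"'

uuid=$(printf '%s\n' "$blkid_line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
fstype=$(printf '%s\n' "$blkid_line" | sed -n 's/.*TYPE="\([^"]*\)".*/\1/p')

printf 'UUID=%s /mnt/block1 %s _netdev 0 0\n' "$uuid" "$fstype"
```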

Mount the iSCSI drive:

[srv1]# mount /mnt/block1

We can log out or delete the session this way:

# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.8.8.70:3260 --logout
# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.8.8.70:3260 -o delete

If things go wrong, we can stop the iscsi service and remove all files under /var/lib/iscsi/nodes to clean up the current configuration. After that, restart the iscsi service and run the discovery and login again.
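The recovery steps just described can be sketched as the following command sequence, using the same target and portal as earlier in the article (run on the initiator, as root):

```shell
# Clean out cached node records, then rediscover and log back in.
systemctl stop iscsi
rm -rf /var/lib/iscsi/nodes/*
systemctl start iscsi
iscsiadm -m discovery -t sendtargets -p 10.8.8.70:3260
iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.8.8.70:3260 --login
```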

23 thoughts on "iSCSI Target and Initiator Configuration on RHEL 7"

  1. # systemctl stop target
    # lvextend -L +200M -r /dev/vgsan/lvsan1

    # systemctl start target

    It keeps the old size even after rebooting the server.
    Remounting it on the initiator, or re-logging in after clearing /var/lib/iscsi/nodes, makes the device disappear entirely (yes, the UUID was updated in /etc/fstab after the re-formatting).

  2. The client takes a very long time to unmount the iSCSI disks, is that normal? Am I supposed to do something about it?
    I am thinking of adding a script to stop the iscsi service before shutdown. Is that the right thing to do for the exam?

    • It takes a second for iSCSI disks to be unmounted in my test lab. Therefore I believe it should not take much time.

      In terms of scripting, I’d only write one if there is an exam task to do so.

  3. Hi Tomas,

    I ran this command, and my screen showed this error; should I remove "write_back=false"?
    /> backstores/block create block1 /dev/vg_san/lv_block1 write_back=false
    Unexpected keyword parameter 'write_back'.

    • Hi Tomas,

      I couldn’t log into discovered target on srv2. But on srv1, i logged into discovered target successful. When i tried to log into discovered target, i show error

      [root@sqllinux2 ~]# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.0.0.130:3260 --login
      Logging in to [iface: default, target: iqn.2003-01.local.rhce.ipa:target, portal: 10.0.0.130,3260] (multiple)
      iscsiadm: Could not login to [iface: default, target: iqn.2003-01.local.rhce.ipa:target, portal: 10.0.0.130,3260].
      iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
      iscsiadm: Could not log into all portals
      [root@sqllinux2 ~]# iscsiadm -m session -P3 | less
      iscsiadm: No active sessions.

      Can i log into discovered target on srv2 ?, if yes, show me the way

      Thanks and Regard !

    • Looks like an authentication issue to me. Ensure that the iSCSI target is configured to allow srv2 to log in, verify iSCSI initiator name, make sure it matches.

    • Check what options are available for block creation on the system that you use, if there is no write_back, then remove it.

    • Any chance you had a different initiator name on the srv2 that tried to access the target? It may be cached, try clearing it.

      You should be able to create it, yes, assuming the iSCSI target is configured and working properly.

    • Oh God, I made it work properly. My last question in this post is: when I ran the commands mkfs.ext4 -m0 /dev/sdb -> mkdir /mnt/block1 -> ... -> mount /mnt/block1 on srv1, the mounted /mnt/block1 can only be seen on srv1, not on srv2. Why? By definition, iSCSI is shared storage.

      Thank you so much for this topic
      Have a nice day !

    • This is the problem :(

      [root@linux1 ~]# mkdir /mnt/block1
      [root@linux1 ~]# blkid | grep sdb
      /dev/sdb: UUID="fa121996-eb2e-4f56-a0a1-91c97e4cef0f" TYPE="ext4"
      [root@linux1 ~]# vi /etc/fstab
      [root@linux1 ~]# mount /mnt/block1
      [root@linux1 ~]# df -h
      Filesystem Size Used Avail Use% Mounted on
      /dev/mapper/cl-root 17G 1.8G 16G 11% /
      devtmpfs 902M 0 902M 0% /dev
      tmpfs 912M 0 912M 0% /dev/shm
      tmpfs 912M 8.7M 904M 1% /run
      tmpfs 912M 0 912M 0% /sys/fs/cgroup
      /dev/sda1 1014M 167M 848M 17% /boot
      tmpfs 183M 0 183M 0% /run/user/0
      /dev/sdb 35G 49M 35G 1% /mnt/block1
      [root@linux1 ~]# cat /proc/partitions
      major minor #blocks name

      8 0 20971520 sda
      8 1 1048576 sda1
      8 2 19921920 sda2
      11 0 1048575 sr0
      253 0 17821696 dm-0
      253 1 2097152 dm-1
      8 16 36700160 sdb

      [root@linux2 ~]# lsblk --scsi|grep LIO
      sdb 3:0:0:0 disk LIO-ORG file1 4.0 iscsi
      sdc 3:0:0:1 disk LIO-ORG block1 4.0 iscsi
      [root@linux2 ~]# df -h
      Filesystem Size Used Avail Use% Mounted on
      /dev/mapper/cl-root 17G 1.8G 16G 11% /
      devtmpfs 902M 0 902M 0% /dev
      tmpfs 912M 0 912M 0% /dev/shm
      tmpfs 912M 8.6M 904M 1% /run
      tmpfs 912M 0 912M 0% /sys/fs/cgroup
      /dev/sda1 1014M 167M 848M 17% /boot
      tmpfs 183M 0 183M 0% /run/user/0
      [root@linux2 ~]# cat /proc/partitions
      major minor #blocks name

      8 0 20971520 sda
      8 1 1048576 sda1
      8 2 19921920 sda2
      11 0 1048575 sr0
      253 0 17821696 dm-0
      253 1 2097152 dm-1
      8 16 36700160 sdb
      8 32 36700160 sdc

      Why can the mount point /dev/sdb be seen from linux1, but not from linux2? When I run "cat /proc/partitions", I still see the sdb disk on both nodes.

    • As per my previous reply, the backstore file1 is mapped to one server only. Go to the target, map the lun to both servers (srv1 and srv2), and you will be able to see /dev/sdb on both clients, and both clients will be able to mount it. I tested it, and all worked fine.

    • [root@linux1 block1]# df -h
      Filesystem Size Used Avail Use% Mounted on
      /dev/mapper/cl-root 17G 1.7G 16G 10% /
      devtmpfs 902M 0 902M 0% /dev
      tmpfs 912M 0 912M 0% /dev/shm
      tmpfs 912M 8.6M 904M 1% /run
      tmpfs 912M 0 912M 0% /sys/fs/cgroup
      /dev/sda1 1014M 184M 831M 19% /boot
      tmpfs 183M 0 183M 0% /run/user/0
      /dev/sdb 35G 49M 35G 1% /mnt/block1

      [root@linux2 block1]# df -h
      Filesystem Size Used Avail Use% Mounted on
      /dev/mapper/cl-root 17G 1.7G 16G 10% /
      devtmpfs 902M 0 902M 0% /dev
      tmpfs 912M 0 912M 0% /dev/shm
      tmpfs 912M 8.7M 904M 1% /run
      tmpfs 912M 0 912M 0% /sys/fs/cgroup
      /dev/sda1 1014M 184M 831M 19% /boot
      tmpfs 183M 0 183M 0% /run/user/0
      /dev/sdb 35G 326M 34G 1% /mnt/block1

      I can mount /dev/sdb, but the size of /dev/sdb on srv1 is different from srv2; they're not the same. Why? I thought they use /dev/sdb together, so the used size should be similar on both nodes?

    • They aren’t the same (well, they should be the same when you mount them initially) because you formatted /dev/sdb as an ext4. As soon as you make changes to the disk /dev/sdb on the server srv1, another server srv2 has no awareness of that and you will eventually lose data, if you make changes from the server srv2.

      Ext4 is not a shared-disk filesystem. Use GFS2 if you want to have both servers writing to the same iSCSI disk.

    • Hi Alex, this parameter creates a thin provisioned LUN. Thin provisioning is called “sparse volumes” sometimes.

      The idea here is to allocate blocks of data on-demand rather than allocating all the blocks in advance.

  4. Yesterday in the RHCE exam I forgot to log out of the iscsiadm session, and before checking with a reboot my time was over and the system automatically powered off. Will the iscsiadm login create any problem, will I pass or fail? Please help friends, I am very nervous.
