
Real Time Measurements for Substation on Welotec RSAPC Mk2 with SEAPATH CentOS Stream 9 Cluster powered by Red Hat

Martin Kohn

Published on 16 Oct, 2024


The Shift to Software-Defined Substations: Navigating Challenges in Hardware and Software Synergy

As the energy sector reaches a critical juncture with the integration of distributed renewable resources, the shift to software-defined substations presents not only new opportunities but also challenges, both for general-purpose software and for modern computer hardware.

As a key enabler of this transition, the synergy between hardware and software must ensure that substations not only continue to provide the protection function effectively, adhering to standards and local regulations, but also gain the flexibility, interoperability and scalability of operations expected of modern IT.

One of the major obstacles to leveraging general-purpose software and hardware in substations is ensuring that it provides sufficient responsiveness to the protection applications running on top of it, so that in case of a disruption in the power grid, corrective actions can be performed in time. This responsiveness is usually expressed as latency and can be measured in multiple ways depending on the particular application type.

Developing a Low-Latency Virtualization Setup for Protection Relays in Digital Substations

This article describes our efforts to develop a reproducible hardware and software setup capable of delivering latency at a level suitable for virtualizing protection relay functions in a digital substation.

For that purpose we focus on conducting latency measurements on CentOS Stream 9 hosts provisioned and configured using the SEAPATH project, based on its recently added CentOS support. Although SEAPATH also offers a broad spectrum of utilities to deploy workloads on a configured cluster, for now we focus only on manual deployment of the test environment.

Optimizing Substation-Grade Hardware: Welotec RSAPC Mk2 in Action

For our hardware setup, we used three Rugged Substation Automation Computers Mk2 by Welotec. These are rugged, fanless 19-inch 2U rackmount industrial PCs specifically designed for typical substation environments. They comply with the IEC 61850-3 and IEEE 1613 standards, operate in a wide temperature range (from -40°C to +70°C), provide protection against electrostatic discharge (ESD) and withstand high electromagnetic fields and RF interference.

We used the top-of-the-range Mk2 models, based on the Intel Xeon W-11865MLE processor with 8 cores (up to 16 threads) and 64 GB of RAM (up to 128 GB available), so that we could deploy multiple containerized and virtualized workloads simultaneously.

Each system came equipped with eight onboard Ethernet ports, some of which we used for the latency measurements.

For all hosts we relied on BIOS version 1.19 of the RSAPC Mk2, which came pre-configured with hyper-threading disabled as part of the Intel TCC option; no special changes were applied to the BIOS configuration. All three servers were provisioned with the CentOS Stream 9 RT operating system with RT virtualization enabled, using an installation medium built from the kickstart file provided here.

After provisioning and cluster configuration were completed, we evaluated the default settings of CentOS Stream 9 and, based on our previous experience, tuned them for the best latency performance.

First we allowed applications to be executed on all cores (since we configured the virtual machines to use sched_setaffinity); by default, SEAPATH restricts this to CPUs 1-7, forcing the user to create cgroups:


systemctl set-property user.slice AllowedCPUs=0-7
systemctl set-property machine.slice AllowedCPUs=0-7
systemctl set-property system.slice AllowedCPUs=0-7
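To confirm the new cpuset took effect, the properties can be read back (an optional check on our side, not part of the SEAPATH configuration):

# Each slice should now report AllowedCPUs=0-7
systemctl show -p AllowedCPUs user.slice machine.slice system.slice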
Furthermore, we set 1 GB hugepages as the default, enabled IOMMU on the host and applied the realtime-virtual-host tuned profile:

grubby --args "default_hugepagesz=1G" --update-kernel ALL
grubby --args "intel_iommu=on iommu=pt" --update-kernel ALL

# Edit /etc/tuned/realtime-virtual-host-variables.conf and set the following
# variables with the respective values:
#    1) isolated_cores=1-7 (CPU 0 will be the housekeeping CPU)
#    2) isolate_managed_irq=Y
#    3) netdev_queue_count=4

tuned-adm profile realtime-virtual-host
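As an optional sanity check (not part of the original procedure), the active profile can be confirmed before proceeding:

# Confirm the realtime-virtual-host profile is active; its kernel command-line
# changes only take effect after the next reboot
tuned-adm active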
With these changes in place, we ensured that 16 GB of hugepages are mapped as 1 GB pages at boot time by applying the following changes and then rebooting the system:

echo "echo 16 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
reboot
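Once the system is back up, the allocation can be verified (a quick sanity check on our side):

# Expect 16 hugepages of 1 GB each on NUMA node 0
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
grep HugePages_Total /proc/meminfo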

After the reboot we assigned one of the onboard NICs to the vfio-pci driver; the exact commands depend on your PCIe topology and the type of NIC you are using, so the sketch below is only an illustration.
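A minimal sketch of binding a NIC to vfio-pci via sysfs, assuming a hypothetical PCI address of 0000:09:00.0 (replace it with the address of the NIC in your system; tools such as driverctl achieve the same result):

# Bind the chosen NIC to vfio-pci (example address only; adjust to your topology)
modprobe vfio-pci
echo vfio-pci > /sys/bus/pci/devices/0000:09:00.0/driver_override
echo 0000:09:00.0 > /sys/bus/pci/devices/0000:09:00.0/driver/unbind
echo 0000:09:00.0 > /sys/bus/pci/drivers_probe

As the last step of the host configuration, we set up L3 cache reservation using Intel CAT: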


# Set up cache-way masks for COS 0-1
wrmsr 0xc90 0xf00 # 1111 0000 0000
wrmsr 0xc91 0x0ff # 0000 1111 1111

# cores 0 and 1 remain in the default COS 0
# core 0: host housekeeping CPU
# core 1: guest housekeeping vCPU
wrmsr -p 0 0xc8f 0x000000000
wrmsr -p 1 0xc8f 0x000000000

# set core 2-7 to use COS 1
wrmsr -p 2 0xc8f 0x100000000
wrmsr -p 3 0xc8f 0x100000000
wrmsr -p 4 0xc8f 0x100000000
wrmsr -p 5 0xc8f 0x100000000
wrmsr -p 6 0xc8f 0x100000000
wrmsr -p 7 0xc8f 0x100000000
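The programmed masks can be read back for verification (wrmsr/rdmsr come from msr-tools and require the msr kernel module, e.g. modprobe msr):

# Read back the cache-way masks and a per-core COS assignment
rdmsr 0xc90
rdmsr 0xc91
rdmsr -p 2 0xc8f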
Then we downloaded a CentOS Stream 9 RT image and used virt-install to install a guest:

virt-install -n RHEL9-RT --os-variant=rhel9.0 \
  --memory=10240,hugepages=yes \
  --memorybacking hugepages=yes,size=1,unit=G,locked=yes \
  --vcpus=3 --numatune=0 \
  --disk path=./rhel9-rt.img,bus=virtio,cache=none,format=qcow2,io=threads,size=20 \
  --graphics none --console pty,target_type=serial \
  -l CentOS-Stream-9-latest-x86_64-dvd1.iso \
  --extra-args 'console=ttyS0,115200n8 serial'
With our guest installed, we patched the guest XML as follows:

--- rhel9rt-vanilla.xml 2024-09-12 06:57:02.953731331 -0400
+++ rhel9rt-vanilla-rt.xml      2024-09-12 07:13:00.013727680 -0400
@@ -15,6 +15,17 @@

     <locked/>
   </memoryBacking>
   <vcpu placement='static'>3</vcpu>
+  <cputune>
+    <vcpupin vcpu='0' cpuset='1'/>
+    <vcpupin vcpu='1' cpuset='2'/>
+    <vcpupin vcpu='2' cpuset='3'/>
+    <vcpupin vcpu='3' cpuset='4'/>
+    <emulatorpin cpuset='0'/>
+    <vcpusched vcpus='0' scheduler='fifo' priority='1'/>
+    <vcpusched vcpus='1' scheduler='fifo' priority='1'/>
+    <vcpusched vcpus='2' scheduler='fifo' priority='1'/>
+    <vcpusched vcpus='3' scheduler='fifo' priority='1'/>
+  </cputune>
   <numatune>
     <memory mode='strict' nodeset='0'/>
   </numatune>
@@ -28,8 +39,14 @@
   <features>
     <acpi/>
     <apic/>
+    <pmu state='off'/>
+    <vmport state='off'/>
   </features>
-  <cpu mode='host-passthrough' check='none' migratable='on'/>
+  <cpu mode='custom' match='exact' check='partial'>
+    <model fallback='allow'>Cascadelake-Server-noTSX</model>
+    <vendor>Intel</vendor>
+    <feature policy='require' name='tsc-deadline'/>
+  </cpu>
   <clock offset='utc'>
     <timer name='rtc' tickpolicy='catchup'/>
     <timer name='pit' tickpolicy='delay'/>
@@ -52,17 +69,6 @@
       <alias name='virtio-disk0'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
     </disk>
-    <disk type='file' device='cdrom'>
-      <driver name='qemu'/>
-      <target dev='sda' bus='sata'/>
-      <readonly/>
-      <alias name='sata0-0-0'/>
-      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
-    </disk>
-    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
-      <alias name='usb'/>
-      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
-    </controller>
     <controller type='pci' index='0' model='pcie-root'>
       <alias name='pcie.0'/>
     </controller>
@@ -178,12 +184,6 @@
       <target type='serial' port='0'/>
       <alias name='serial0'/>
     </console>
-    <channel type='unix'>
-      <source mode='bind' path='/run/libvirt/qemu/channel/6-RHEL9-RT/org.qemu.guest_agent.0'/>
-      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
-      <alias name='channel0'/>
-      <address type='virtio-serial' controller='0' bus='0' port='1'/>
-    </channel>
     <input type='mouse' bus='ps2'>
       <alias name='input0'/>
     </input>
@@ -191,18 +191,14 @@
       <alias name='input1'/>	
     </input>
     <audio id='1' type='none'/>
-    <watchdog model='itco' action='reset'>
-      <alias name='watchdog0'/>
-    </watchdog>
-    <memballoon model='virtio'>
-      <alias name='balloon0'/>
-      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
-    </memballoon>
-    <rng model='virtio'>
-      <backend model='random'>	/dev/urandom</backend>
-      <alias name='rng0'/>
-      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
-    </rng>
+    <watchdog model='itco' action='none'/>
+    <memballoon model='none'/>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <source>
+        <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
+      </source>
+      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
+    </hostdev>
   </devices>
   <seclabel type='dynamic' model='selinux' relabel='yes'>
     <label>system_u:system_r:svirt_t:s0:c325,c379</label>
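As a brief usage note (our own workflow, not mandated by SEAPATH), the patched definition can be applied and the guest brought up with standard libvirt tooling:

virsh edit RHEL9-RT      # apply the XML changes shown above
virsh start RHEL9-RT     # boot the guest with the updated configuration
virsh console RHEL9-RT   # attach to the serial console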

Testing Environment: Benchmarking Latency in Virtualized Substation Setups

In order to make the results of the latency measurements maximally representative for the digital substation use case, we aligned the test environment architecture with a proposal currently in the works by the vPAC Alliance. We used the l2reflect application, proposed for inclusion in the DPDK framework, to benchmark the maximum L2 round-trip latency of the tuned system; its advantage is that it measures not only OS latency (as e.g. cyclictest does) but also the latency originating from the networking stack and the Ethernet link itself.

Figure 1. Test bench setup with virtualized test application

To measure latency in a virtualized environment, we manually deployed real-time-capable VMs running the l2reflect application on two of the previously configured instances. Figure 1 depicts the deployment in detail. We isolated CPUs 1-7, leaving CPU 0 for the housekeeping tasks of the host operating system.

In addition, an instance of stress-ng was deployed on CPU 0 to simulate further best-effort applications running on the host. CPUs 1-4 were then assigned to the VM, with corresponding pinning to vCPU0-3, and one of the onboard NICs was passed through to the VM. In the VM we dedicated vCPU0 (CPU 1 on the host) to the housekeeping tasks of the guest operating system and again deployed a stress-ng instance to simulate best-effort applications running in the VM.

We then assigned an l2reflect instance to vCPU1 and vCPU2 and ran it under the SCHED_FIFO policy. On the remaining isolated vCPU3 we deployed a cyclictest instance to simulate another latency-sensitive application that does not utilize I/O, in order to collect additional information about system performance.
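To illustrate this assignment, the following sketch shows how the best-effort load and the latency-sensitive task can be pinned and prioritized; the exact parameters (stressor counts, priority, interval) are our own illustrative choices and are not prescriptive:

# On the host: best-effort load pinned to the housekeeping CPU 0
taskset -c 0 stress-ng --cpu 1 --vm 1 --vm-bytes 256M &

# Inside the guest: cyclictest pinned to the remaining isolated vCPU 3 under SCHED_FIFO
cyclictest -a 3 -t 1 -p 95 -m -i 200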

Achieving Consistent Low Latency in Substations: Results from a 7-Day Continuous Test

Testing ran continuously over a 7-day interval to gather sufficient statistics on the measured latency and to confirm the overall stability of the system during long runs. As a result, the latency reported by the l2reflect test did not exceed 82 µs (averaging 44 µs) for the virtualized deployment.

Although there is no commonly accepted maximum latency value for protection relay workloads in substations, it is often considered that the values reported by the l2reflect test must not exceed 250 µs. It is worth mentioning that this threshold is not backed by a specific theoretical model but is based on the experience of different players in the substation automation field.

The obtained values provide strong evidence that responsiveness sufficient for a typical protection relay application can be achieved using KVM-RT virtualization and a general-purpose operating system like CentOS Stream 9 when deployed on industrial-grade hardware like the Welotec RSAPC Mk2.

Moreover, running our tests for seven consecutive days not only demonstrates that a maximum latency well below 250 µs is achievable but also shows that this value can be sustained over an extended period of time.

Authors: Marcelo Tosatti (Red Hat), Daniel Knüppe (Welotec), Martin Kohn (Welotec), Alexander Lougovski (Red Hat)

Expert

Martin Kohn

Product Owner at Welotec GmbH

Martin Kohn is a Product Owner for Software Solutions at Welotec GmbH, specializing in SCRUM-based project and product management. 

With a background in software development, particularly in Yocto/Embedded Linux and C++, he drives innovation in IIoT and Substation Solutions. Martin holds a Diplom in Physics. His career includes roles in software engineering and academic research at University of Münster, where he worked on advanced particle detectors.
