Existing transmission power grids suffer from high maintenance costs and scalability issues, along with a lack of effective and secure system monitoring. To address these problems, we propose to use Wireless Sensor Networks (WSNs) as a technology to achieve energy-efficient, reliable, and low-cost remote monitoring of transmission grids. With WSNs, a smart grid enables both utilities and customers to monitor, predict and manage energy usage effectively and to react to possible power grid disturbances in a timely manner. However, the increased application of WSNs also introduces new security challenges, especially related to privacy, connectivity, and security management, repeatedly causing unpredicted expenditures. While monitoring the status of the power system, a large number of sensors generates a massive amount of sensitive data. In order to build an effective WSN for a smart grid, we focus on designing a methodology for efficient and secure delivery of the data measured on transmission lines. We perform a set of simulations, in which we examine different routing algorithms, security mechanisms and WSN deployments in order to select the parameters that will not affect the delivery time, but will fulfill their role and ensure security at the same time. Furthermore, we analyze the optimal placement of direct wireless links, aiming at minimizing time delays, balancing network performance and decreasing deployment costs.
A smart grid is an initiative to promote and transform traditional power systems into modern and automated power grids. An electrical power system is a set of many elements, such as transmission lines, transformers and generators, connected together into a larger system that can generate, transmit and distribute electric power. Different kinds of electrical elements imply a large variety of dynamic actions or responses to disturbances. Some power system disruptions affect a single element, others affect larger fragments, and some failures can spread and affect the system as a whole. As each dynamic effect reflects a certain unique feature of power system dynamics, they can be grouped according to their cause, consequence, time frame, physical character or the place in the system where they occur. Based on the physical character of the disturbance, power system dynamics can be divided into four groups: wave, electromagnetic, electromechanical and thermodynamic. This classification also corresponds to the time frame involved (Figure 1 and Table 1). The fastest transients are related to the wave effects or surges in high voltage transmission lines and correspond to the propagation of electromagnetic waves caused by lightning strikes or switching operations. The time frame of these dynamics ranges from microseconds (μs) to milliseconds (ms).
Much slower are the electromagnetic dynamics, which take place in the machine windings following a disturbance such as a short-circuit, the operation of protection devices like distance or overcurrent protection, or the interaction between the electrical machines and the power system. Their time frame ranges from milliseconds (ms) to seconds (s).
Even slower are the electromechanical dynamics, due to the oscillation of the rotating masses of generators and motors that usually occurs after a disturbance such as a short-circuit or the switching off of a large amount of generation. Electromechanical transients are also caused by the operation of the protection system, such as underfrequency or undervoltage protection. The time frame of these dynamics ranges from around a second to several seconds.
The slowest dynamics are the thermodynamic transients, related to the active power generation control in power plants (boiler-turbine-generator control) or to transmission line wire temperature variations caused by changing weather conditions and line current flow. (Such current flow changes result from weather conditions or disturbances such as long-term line overloading due to high ambient temperatures, short-term line overloading due to power swings in the system, or very short-term high overcurrent due to a short-circuit.)
Figure 1 shows the classification of power system dynamics with respect to the time frame. As is evident from Figure 1, the time frames are closely related to where the dynamics occur within the system. For instance, moving from left to right along the time scale in Figure 1 corresponds to moving through the power system from the electrical circuits of the transmission network (which contain elements such as resistance (resistors), inductance (inductive coils) and capacitance (capacitors)), through the generator armature windings to the field and damper windings, then along the generator rotor to the turbine, until finally the slow thermodynamic transients of the boiler-turbine-generator are reached.
In today’s electric power systems, sensing and monitoring equipment is employed in a limited number of critical assets, such as power transmission lines and substations. Generally, this sensing equipment is not interconnected, and transmits the collected data over wired communications (such as Ethernet or fiber-optic) to a central location, where a supervisory control system and dispatchers at the utility headquarters manage grid operations. Wired monitoring systems require expensive communication cables to be installed and regularly maintained. Additionally, the use of wired communication in harsh environments is very often uneconomical or even impossible. Hence, there is an urgent need for cost-effective wireless monitoring and diagnostic systems that can improve system reliability and efficiency by optimizing the management of electric power systems. Recent studies [2,3] showed that the philosophy of WSNs has great potential to address the challenges of existing power grids and to fulfill their sensing and communication tasks. The need for an uninterrupted electricity supply and the high costs of building new power generation and transmission facilities are forcing governments and operators to make efficient use of the network infrastructure and to increase the regulatory abilities of power grids using new kinds of measurements. A reliable and uninterrupted supply of electrical energy became the foundation for the creation of new Information and Communication Technologies (ICTs), such as WSN measurement systems. The ability to harvest new information about transmission line parameters has driven the rapid development of new algorithms for power system operation.
One way to use the grid infrastructure more efficiently (for instance, overhead transmission lines, by increasing their ampacity and transmission flexibility) is to monitor weather conditions as well as mechanical and electrical parameters along the lines, in order to determine the Dynamic Line Rating (DLR) and allow for controlled overloading during a certain time. Additional measurements of non-electrical quantities, such as temperature, wire strain, elongation or sag, combined with the current weather conditions, provide further information about the safe or unsafe operation of transmission lines. The above considerations lead to the conclusion that a WSN can provide important information that can be used for power system estimation or optimization, along with providing an additional decision variable for the dispatcher on whether to switch off the line.
Since monitoring and control of electric power systems are essential for their efficient and effective functioning, WSNs, as low-cost, flexible and self-organizing networks, appear to be the perfect choice for creating a highly reliable and self-healing smart power grid that rapidly responds to online events with appropriate actions. The nature of WSNs brings significant advantages over traditional communication technologies. WSNs are characterized by rapid and straightforward deployment (even in difficult-to-access and large geographical locations), a high coverage area, low installation and maintenance costs, as well as easy replacement and upgrading procedures. Furthermore, they have the capability to organize and configure themselves into effective communication networks. Their flexibility and mobility are higher than those of wired networks. Potential applications of WSNs in the smart grid span a wide range, including, but not limited to: power quality monitoring, outage detection, overhead transmission line monitoring (such as conductor elongation or temperature measurements in dynamic line rating systems), fault detection and location, equipment fault diagnostics and underground cable system monitoring. The collaborative and context-aware nature of WSNs brings several advantages over traditional sensing, including greater fault tolerance, improved accuracy, a larger coverage area, and the extraction of localized features. Despite these undeniable advantages, there are also several challenges, such as security. As is the case with all kinds of wireless networks, WSNs are vulnerable to security threats originating from the open communication environment, which poses serious risks and exposes the system to many threats. Unlike other wireless technologies, applying advanced and complex security mechanisms to WSNs is not feasible due to their physical limitations. This complicates protection measures and makes WSNs more vulnerable to external attacks.
Moreover, WSN-based smart grids suffer from all the security threats facing classical WSN communication. Because of their complexity, large number of stakeholders, and highly time-sensitive operational requirements, power grids have additional vulnerabilities and are prone to various types of attacks [6,7,8]. One should also remember that available security solutions, even those designed for WSNs, are often costly and consume large amounts of Central Processing Unit (CPU) resources. Another drawback of using WSNs over long distances is time delay. The delays in information transmission determine, among other things, whether the WSN structure can in practice serve the DLR or support the operation of protection devices by blocking their action. The remaining delays are related to the routing algorithms used, and depend on how far the data needs to travel through the transmission grid and on when the collected data is finally sent to the base station. For environments with rigorously defined requirements on data delivery, it is essential to minimize the information transmission delay in order to allow for efficient and effective communication. Hence, in this paper, we focus on minimizing the delay time while simultaneously considering the reliability of the communication. To overcome the above issues, we propose a secure and reliable communication plan for WSNs in transmission grid systems.
In this paper, we contribute to analyzing and solving the problem of time delays generated during measurement data transmission. The main contributions of this paper are summarized as follows.
The remainder of this paper is organized as follows: in Section 2, we present the related work. Moving on to Section 3, we discuss our environment, describing considered architectures and scenarios. Section 4 is focused on practical representation of our approach. Next, in Section 5, we analyze the results and investigate the relationship between time delays and different architectures. Finally, we conclude with Section 6, in which we summarize our work.
The smart grid can provide efficient, reliable, and safe energy automation service with two-way communication and electricity flows. The use of WSN helps in capturing and analyzing data related to power usage, delivery, and generation. The energy usage and management information, including the energy usage frequency, phase angle and the values of voltage, can be read in real time from remote devices. Therefore, utility companies can manage electricity demand efficiently. They can reduce operational costs by eliminating the need for human readers and provide an automatic pricing system for customers, who can enjoy highly reliable, flexible, readily accessible and cost-effective energy services. However, in order to achieve a trade-off between the performance and potential time delays, the analyzed architecture should be carefully examined.
Given the vast geographical expanse of the transmission grid architecture, delivering monitoring information to the control center in a cost-efficient and timely manner is a critical challenge that must be addressed in order to build an intelligent smart grid. Since WSNs present a feasible and effective solution for the transmission of monitoring data, many works deal with the possibility of adopting WSNs for monitoring the operational parameters of power lines [9,10,11,12]. Although the optimization of WSN architectures monitoring transmission lines has been examined many times before [2,3,13,14], none of these works provides an exact analysis of the security of the proposed solutions.
In , the authors propose a hybrid hierarchical network (combining wired (copper cable/optical fiber) and wireless (cellular/IEEE ) standards) to provide cost-optimized, delay- and bandwidth-constrained data transmission. They formulate a placement problem to optimize the number and location of cellular-enabled towers (base stations), so as to significantly reduce the operational and installation costs while respecting effective data transmission, bandwidth, delay and connectivity constraints. Their goal was to minimize the installation and operational cost while satisfying the latency and bandwidth constraints of the data flows. The researchers evaluate the proposed solution (using the ILOG CPLEX software for the simulation), considering an example transmission line network with a defined number of towers, average span, data packet length and bandwidth. They examine different scenarios, including variations in flow bandwidth and network size, and compare the obtained results. Although the authors claim that their method is generic and encompasses variation in several factors, they consider neither the security of the transmission nor the routing algorithms responsible for delivering the data to the base station.
Another paper dealing with the problem of efficient communication among sensors monitoring transmission power lines is presented in . Here, the researchers propose a network model in which wireless sensors, deployed along the transmission line, communicate with each other and, if needed, reconfigure (based on the application requirements) to deliver information to substations efficiently and effectively. The authors examine the performance of the linear network model, providing calculations for sensors with defined characteristics. They then present an example reconfigurable network model and illustrate its usage with a simple case study. The article emphasizes transmission time delays, mentioning neither energy consumption nor security.
 extends the research proposed in . In this paper, the authors study how the data measured on transmission power lines can be delivered efficiently to substations. They try to find the optimal placement of direct wireless links, aiming at minimizing the delay in information delivery. First, the researchers again consider the linear network model and prove that it cannot deliver information in a timely fashion and suffers from an imbalance of workload, because the relays closer to the substations have to handle much more traffic than those located farther away. Therefore, they formulate a general solution for minimizing the delay of data delivery. The main idea is to divide all the nodes into different groups, where each group contains the nodes that send information to the same base station. Finally, using the introduced formulas for the optimal placement of direct links, the researchers provide numerical results and analyze how the number of groups influences time delay and energy consumption. The improvement in minimizing time delays achieved by the proposed method is significant. However, focusing on optimal transmission time, the authors do not consider the security of the transmission.
As recent studies show [2,3,13], using WSNs in power grids is an important research direction. However, none of the works examined above pays attention to the security of the data traversing the wireless network. Since radio waves in wireless communication propagate through the air, wireless channels are inherently more insecure and susceptible to numerous attacks than wired networks. This is why securing the electrical transmission power grid when using wireless sensor networks is one of the most essential factors in this type of deployment. In this paper, we address this problem, considering not only the minimization of time delays, energy consumption and financial cost, but, due to the sensitivity of the data, also the security of the transmission.
In this section, we describe a transmission line monitoring scheme, proposing an example architecture of a WSN deployed in various places along the considered transmission line. Moreover, we develop corresponding scenarios, in which we examine the performance, security, transmission time and delays under defined conditions, providing a concise representation of the utilized routing algorithms and packet flows. Through a set of simulations, we analyze the gathered results to identify the optimal routing scheme in the existing WSN, in order to minimize the transmission time delay. Using the gathered results, we try to answer the question of which type of WSN architecture, along with which routing algorithms and security mechanisms, can be used for measuring and monitoring power system phenomena and is capable of acting as an additional measure of protection, control or operational optimization.
In general, the power grid energy subsystem is divided into power generation, transmission, distribution and the consumer site. The power grid communication system, on the other hand, is composed of several subsystems, being in fact a network of networks. The core of the monitoring and control of a substation is the Supervisory Control And Data Acquisition (SCADA) system. The remaining communication networks in smart grid systems include, for instance, cellular, microwave, fiber optic, serial link or wireless local area networks. As in this paper we try to answer questions about the time delays generated during measurement data transmission, we consider only the transmission grid. A typical transmission line is presented in Figure 2. Within the transmission grid, WSNs deal with power transmission issues and distribution monitoring. Because sensors handle sensitive and confidential power grid data, which should be transmitted securely to prevent theft and avoid any unauthorized access, secure and reliable protocols need to be implemented. Because the nature of WSNs makes the system vulnerable to various attacks and to physical and cyber threats, we decided to analyze various WSN deployments (which differ in the utilized security mechanisms or routing algorithms) and to indicate the architecture that will be the most suitable for different smart grid applications.
An example 110 kV transmission line, with wireless sensors deployed in various places on the electric poles (which is the basis of our considerations), is presented in Figure 3. The poles carry the transmission lines between two substations. For 110 kV transmission lines, the most common distance between two poles ranges between 100 and 300 m. There exist, however, transmission lines with a distance between poles of 450 m, but these are 400 kV lines, not considered in our case study. In our scenarios, we assumed the distance between two adjacent poles to be 50, 100, 200 or 300 m. There are 43 poles between the two substations, and 45 poles in total (including the substations). There are 4 sensors and a relay node (also referred to as the sink node) deployed on each pole along the line (Figure 4), which results in 180 deployed sensor devices in total. The group of 4 sensors sends its information to the relay on the same pole. After collecting information from all its slave sensors, the sink node on each pole sends the information towards the substation. The role of the substation is to transmit the data to the gateway.
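The deployment figures above can be reproduced with a short sketch (the variable names are ours, for illustration only, and are not taken from the paper's simulation code):

```python
# Deployment described above: 45 poles (including the two substations),
# each carrying 4 measurement sensors and 1 relay (sink) node.
SENSORS_PER_POLE = 4
POLES_TOTAL = 45
POLE_SPACINGS_M = (50, 100, 200, 300)   # pole distances examined in the scenarios

total_sensors = SENSORS_PER_POLE * POLES_TOTAL   # 180 sensor devices in total
total_relays = POLES_TOTAL                       # one sink node per pole

print(total_sensors, total_relays)
```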
Because the distance between the sensors and their relay node is quite small, a short-range communication technology suffices; we therefore utilized TelosB devices as the data-collecting sensors located on the electric poles (their outdoor range is about 100 m), and Mica2 motes as sink nodes, since their outdoor range is about 300 m (which is sufficient when we consider the distance between two adjacent poles to be 50, 100, 200 or 300 m). Table 2 shows some of the hardware characteristics of the utilized sensor devices.
Mica2 motes, with a form factor of just 58 mm × 32 mm, include data RAM, processing, and communication capabilities. Combined with a tiny battery, the node is capable of periodically reporting its presence for years to come, being a notable example of a general-sensing-class device. It includes a large interface connector allowing its attachment to an array of sensors. By providing a large number of I/O pins and expansion options, the Mica2 is a perfect sensor node option for any application where size and cost are not absolutely critical. It is, for example, easily connected to motion detectors and door-and-window sensors as the foundation of a building security system. Moreover, the Mica2 is capable of receiving messages from nodes attached to high-value assets at risk of being stolen, including personal computers and laptops. Crossbow’s TelosB mote bundles all the essentials into a single platform, including USB programming capability, an IEEE 802.15.4 radio with an integrated antenna, a low-power MCU with extended memory and an optional sensor suite. It offers many features, including, among others, an IEEE 802.15.4/ZigBee-compliant RF transceiver operating in the 2.4 GHz globally compatible ISM band, and a 250 kbps data rate.
Regarding the measured parameters, in this article we assumed that the sensors measure the following quantities: wind speed and direction, atmospheric temperature, pressure and humidity, and solar radiation, along with conductor current, temperature, tension and inclination. On each pole along the transmission line, the sensors perform 6 weather measurements (ambient temperature, atmospheric pressure, humidity, solar radiation, wind speed and wind direction). Because a time stamp is attached to each measured value, the number of values per pole is doubled, which results in 12 measurement values. There are also 3 current measurements (one per phase) along with 3 conductor temperature measurements, and 4 tension and 4 inclination measurements (one for each phase and for the lightning wire), again doubled because of the time stamps. This results in 40 measurement values in total. If we assume that a single measurement can be stored as a single float value (8 bytes), then the size of the packet transmitted between relay nodes equals 320 bytes.
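The packet-size arithmetic above can be checked as follows (a sketch of the counting described in the text, not the authors' code):

```python
FLOAT_BYTES = 8      # one measurement stored as an 8-byte float

# Quantities measured on each pole:
weather = 6          # ambient temperature, pressure, humidity,
                     # solar radiation, wind speed, wind direction
current = 3          # conductor current, one per phase
conductor_temp = 3   # conductor temperature, one per phase
tension = 4          # one per phase plus the lightning wire
inclination = 4      # one per phase plus the lightning wire

measurements = weather + current + conductor_temp + tension + inclination  # 20
values = 2 * measurements          # each value carries a time stamp -> 40
packet_bytes = values * FLOAT_BYTES

print(packet_bytes)  # 320
```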
The actual cost of the hardware depends on the sensor's capabilities. The approximate cost of the TelosB device utilized in our scenarios is $182, while the relay node can be bought for about $152. According to the analysis given in  and the information provided in Table 2, the cost of deploying a WSN is indeed lower than the cost of, for instance, cellular links.
The values measured by the sensors are important for many different smart grid applications. The described parameters are used by the wire thermal models of DLR systems for calculating the actual value of the allowable current or power transfer through the line. Such measurements can also be used by power system estimation or optimization software. The thermal time constant of the power line conductor ranges from 5 to 15 min and depends on the actual weather parameters. Acquiring data from all power line spans in real time determines the operational safety of the line. Let us consider, for instance, the conductor temperature measurement. It can be used for blocking an automatic reclosing action after clearing a power line short circuit. Usually, the first reclosing time ranges from to s, while the second reclosing time is much longer and ranges from 10 to 20 s. The described reclosing time can be extended if the conductor temperature exceeds the maximum allowable value after a short circuit. In this particular case, in order to provide the required protection of the relay in the substation, it is essential to know whether the measurement data containing the conductor's temperature can be delivered within the assumed time period.
In order to fulfill their role accurately, transmission line monitoring systems need continuously updated information on the measured parameters. As a consequence, proper synchronization between the wireless sensors is required, so that the quantities measured at different points of the transmission line are directly comparable. The implemented WSN monitoring system must be able to acquire information from the different nodes of the network in a synchronized manner and distribute the data. Given the complexity of the system to which it is applied, the problem of measurement synchronization is an enormous challenge in and of itself. The synchronization accuracy requirements can change significantly depending on the constraints of the applications, which vary from system to system. In environmental monitoring applications, several synchronization solutions exist. One available approach is to use time stamps, which can be further compared and analyzed. However, in practice, with Network Time Protocol (NTP)-based solutions it is sometimes difficult to ensure that such mechanisms are in use and correctly configured on all sensors. Another possible approach is to use a so-called trigger signal for the synchronous triggering of each measurement. With such external synchronization, the system under test triggers a synchronization signal. This method allows the measurement system response to be assessed with great accuracy. In the proposed scenarios, we simply assumed that the measurements are triggered by the synchronization signal, so that they are made at the same time, and are further transmitted to the substation with the help of the chosen routing algorithm.
When designing our scenarios, we assumed that the power line is located in an open area, where the relay node antennas are mostly visible to each other. However, since the wireless radio channel puts fundamental limitations on the performance of wireless communication systems (radio channels are extremely random and not easily analyzed), the radio channel is typically modeled in a statistical fashion, using mathematical transmission models. Because path loss is the largest and most variable quantity in the received signal strength (link budget), we decided to use this model in our simulations to calculate the quality of the connections between sink nodes (referred to in our model as the q value). Among many other factors, the path loss depends on frequency, antenna height, receiver location relative to obstacles and reflectors, and link distance. The free space path loss model is a propagation loss model that takes into consideration distance and frequency; for this reason, it is a suitable choice for our estimations. To calculate the exact q values utilized in our simulations, we used the technical specifications of the Mica2 and TelosB sensor devices to obtain their physical characteristics (RF power, frequency, receive sensitivity). The exact q value calculation formula is given by the following equation:
The q value defines the quality of the connection and is calculated using the above formula. It is represented as a non-negative real number, where the lower the q value, the better the connection quality.
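Although the exact q value equation did not survive into this excerpt, the free space path loss component it relies on is standard and can be sketched as follows. The transmit power and receive sensitivity below are typical datasheet values for the Mica2, and deriving a link margin from them is our illustrative assumption, not the paper's formula:

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB:
    FSPL = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Assumed Mica2 sink-to-sink link characteristics (typical datasheet values):
TX_POWER_DBM = 5          # maximum Mica2 transmit power setting
RX_SENSITIVITY_DBM = -98  # typical Mica2 receive sensitivity
FREQ_HZ = 868e6           # one of the Mica2 ISM bands

for d in (50, 100, 200, 300):
    loss = free_space_path_loss_db(d, FREQ_HZ)
    margin = TX_POWER_DBM - RX_SENSITIVITY_DBM - loss
    print(f"{d:>4} m: path loss {loss:.1f} dB, link margin {margin:.1f} dB")
```

A link with a larger remaining margin would map to a lower (better) q value in the model above.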
The success of WSN-based monitoring applications requires the delivery of high-priority events to relay nodes without any loss from the original sources to the final destination. For this reason, packet loss is another crucial aspect that must be considered while deploying WSNs as a monitoring solution in a transmission power grid. Packet loss can be caused by a number of factors, including signal degradation over the network medium, congestion due to heavy traffic, collisions at the link layer or buffer overflows. The transmission distance between sensor nodes and the quality of the connection affect packet loss as well. There exists a strong correlation between packet loss and signal strength: over long distances, the signal fading effect leads to a low signal-to-noise ratio. Environmental interference also contributes to packet loss. The above constraints emphasize the need to improve transmission reliability in WSNs. One of the most common approaches to dealing with packet loss is packet retransmission, which requires additional mechanisms to be implemented in order to trigger the retransmission of a lost packet. For instance, with the acknowledgment (ACK) mechanism, every successfully received packet should be directly acknowledged by the receiver with a notification message. If the sender does not recognize the expected notification message, the packet is retransmitted. The ACK mechanism offers the highest reliability guarantee. It can be used by protocols providing packet reliability on a hop-by-hop as well as an end-to-end level.
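The hop-level ACK retransmission idea can be sketched as follows (a generic illustration of the mechanism, not a protocol from the paper):

```python
import random

def send_with_ack(loss_prob, max_retries=3, rng=None):
    """Simulate one hop with ACK-triggered retransmission.

    Each attempt is lost with probability `loss_prob`; the sender keeps
    retransmitting until it sees an ACK or exhausts `max_retries` attempts.
    Returns (delivered, attempts_used).
    """
    rng = rng or random.Random()
    for attempt in range(1, max_retries + 1):
        if rng.random() >= loss_prob:   # packet delivered and ACK received
            return True, attempt
    return False, max_retries           # dropped after all retries

# A perfectly reliable channel needs one attempt;
# a dead channel exhausts all retries.
print(send_with_ack(0.0))
print(send_with_ack(1.0))
```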
Generally, in order to eliminate or minimize packet loss in the considered transmission grid environment, it is possible to introduce retransmission mechanisms, or simply to increase the signal strength by powering the sensors with energy harvesting devices. However, in our case study, we perform only a single round of measurements (triggered by the synchronization signal), in which the measured data is transmitted over quite short distances. For this reason, we assumed that in our case the packet loss is negligible (almost equal to 0).
After this brief introduction of the considered environment, let us now move on to the discussion of the various WSN deployments and routing algorithms, which we prepared for our simulations and developed using QoP-ML. We propose different scenarios and analyze them in terms of transmission time delay and security.
As proved in , the linear network model proposed for transmission line monitoring in Figure 3 cannot deliver information in a timely fashion. For this reason, instead of focusing only on a linear network model, we decided to analyze other methods of data delivery as well. Instead of using hop-by-hop communication along the whole line (Scenario 5), we propose to divide all the nodes into smaller groups (called clusters) and route information to cluster heads, such that the cluster heads communicate directly with the substation (Scenarios 1, 2, 3 and 4). A node that is not directly connected to the cluster head sends its information in a hop-by-hop manner to the cluster head within its group. The selection of the cluster head is determined by its physical position on an electric pole: the node located in the most suitable position on a pole (in terms of, for instance, the lack of environmental interference) is always chosen as the cluster head. The routing algorithm has information about the connections between the sensor devices and about the quality of the paths used by the devices for sending packets towards the substation, and it knows which of them is the cluster head. We denote the number of all nodes by , while the number of hops within a single group (cluster) is denoted as . Then, the number of groups on the power line (or, equivalently, the number of direct links to the base station) is given as:
and, in our case, . Since the number of electric poles within the power line is fixed, we can assume that is constant and equal to 45. Thus, using the Formula (2), it is fairly easy to calculate (see Table 3). As mentioned before, values represent the number of wireless, cellular links, which the relay nodes (designated cluster heads) use to send collected data directly to the base station. Below we describe each of the prepared scenarios in greater detail.
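The grouping rule above can be sketched in a few lines of Python; the function name and the ceiling-based reading of Formula (2) are our own interpretation, not code from the paper.

```python
import math

def num_groups(n_poles, hops_per_cluster):
    """Number of clusters -- and hence direct cellular links to the
    base station -- for a line of n_poles relay nodes, assuming each
    cluster spans hops_per_cluster hops (k = ceil(n / h))."""
    return math.ceil(n_poles / hops_per_cluster)

# For the fixed line of 45 poles considered here: one hop per cluster
# gives 45 direct links (Scenario 2), 23 hops give 2 links (Scenario 1),
# and 45 hops give a single link (Scenario 5).
```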
Scenario 1 Developing the first scenario, we examined the linear network model, assuming that there are 45 poles in total and the power line has only 2 direct links to the substation. In this type of network (Figure 3), where the data traverses in a hop-by-hop manner, a relay node not directly connected to the substation sends its information to the neighbouring relay that is closer to the substation. For instance, the relay node on the 42nd pole would send its data to the 43rd pole, which can then send its own data, together with the data from pole 42, to pole 44. After reaching the 44th pole, the relay node on this pole sends the data to the substation, which transmits it to the gateway. In order to improve data delivery, we decided to divide the considered power line into two sub-lines, where relay nodes on poles 0–21 send the information to the substation on the left side of the power line, while sink nodes on poles 22 to 44 transmit the data to the second base station, located at the right end of the power line. Because the distance between a sensor and its relay node is quite small, so that a short-range communication technology suffices, we simply omitted this value in our simulations.
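The forwarding rule of Scenario 1 can be illustrated with a short sketch; the pole numbering (0–44) follows the text, while the function name and the string labels for the two substations are our own.

```python
def next_hop(pole, n_poles=45):
    """Scenario 1 forwarding: poles 0-21 relay towards the left
    substation, poles 22-44 towards the right one. Returns the
    neighbouring pole to forward to, or the substation reached directly."""
    if pole <= 21:                                   # left sub-line
        return "left substation" if pole == 0 else pole - 1
    else:                                            # right sub-line
        return "right substation" if pole == n_poles - 1 else pole + 1

# e.g. the relay on pole 42 forwards to pole 43, while pole 44
# reaches its substation directly
```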
Scenario 2 In this scenario, we consider a situation where every relay node on each of the 45 poles has a direct connection to the substation. Because there are no intermediary relay nodes between the relay node on a pole and the substation, this type of deployment should logically have the smallest delay time of all (Figure 5). By setting up a direct link to the base station on each pole, the delay is minimized and the workload among relays is balanced. Nevertheless, such a solution requires the base station to be within the sensor's radio range (so that they can hear each other). Although the rather small time delays speak in favor of this scenario, there is another disadvantage that prevents its use in practice: this approach is expensive in terms of equipment cost and the extra energy consumption of the direct cellular wireless links.
Scenarios 3 and 4 In order to find a compromise between cost and time delays, we decided to select only some relay nodes to establish direct links to the base station. Relays that are not directly connected to the base station should send their information to cluster heads which have an established direct link to the base station. It is thus obvious that the positions of the direct links influence time delays. To find the optimal arrangement of direct base station connections, we proposed to analyze Scenarios 3 and 4 and examine how the number and the positions of direct base station connections affect time delays.
Consider, for instance, Scenario 3, where the number of direct links is equal to 12. Here, the relays on poles 1 and 3 both send their information to node 2, which has the direct link and is able to send the data to the substation. Then, nodes on poles 4 and 6 send data to node 5 (which is responsible for communicating with the base station), nodes 7 and 9 send the data to the relay on pole 8 (which communicates with one of the available substations and can transmit the data using its direct link), and so forth (Figure 6 and Figure 7).
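The cluster-head pattern from the example (nodes 1, 3 → 2; 4, 6 → 5; 7, 9 → 8) can be written down directly. Note that this is only an illustration of the triples given in the text: applied uniformly to 45 poles it would yield 15 clusters, whereas the scenario fixes 12 direct links, so the grouping near the line ends must differ slightly.

```python
def cluster_head(pole):
    """Head of the 3-pole cluster containing `pole` (poles numbered
    from 1), following the example pattern: 1,3 -> 2; 4,6 -> 5; ..."""
    return ((pole - 1) // 3) * 3 + 2

# relays 1 and 3 both report to relay 2, which owns the direct link
```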
Scenario 5 Last but not least, Scenario 5 examines the performance and time delays when there is only one substation and the measured data from every pole needs to traverse the entire power line in order to reach the substation. From the performance point of view, it does not make sense to consider the number of hops to be greater than 23, half the length of the line. The scenario with the number of hops equal to 45 is considered only as an example, in order to show that the approach where the number of hops is greater than half the length of the line is inefficient and should not be used.
The prepared scenarios, together with the factors significant in our analyses, are summarized in Table 3.
In the article , the QoP-ML was introduced. The proposed solution provides a modeling language for abstracting cryptographic protocols that puts emphasis on the details concerning the quality of protection. The intended use of QoP-ML is to represent the series of steps which are described as a cryptographic protocol. The QoP-ML introduced multilevel protocol analysis, which extends the possibility of defining the state of the cryptographic protocol. Since the approaches presented in the literature are usually instances of model-driven security, in the light of the available development methodologies QoP-ML fits excellently into the design approach known as Model-Driven Engineering. Model-Driven Engineering (MDE) focuses on the creation and utilization of abstract representations of the knowledge that governs a particular domain, rather than on computing, algorithmic or implementation concepts. The Model-Driven Engineering approach is a broader concept than Model-Driven Architecture (MDA) or Model-Driven Security (MDS): MDE adds multiple modeling dimensions and the notion of a software engineering process. The various dimensions and their intersections, together with a Domain-Specific Language (DSL), form a powerful framework capable of describing engineering and maintenance processes by defining the order in which models should be produced and how they are transformed into each other. Serving as a domain-specific language, QoP-ML is capable of expressing security models in a formalized, consistent and logical manner.
As is apparent from the above description, QoP-ML is a flexible, powerful approach to modeling complex IT environments. Therefore, we utilized it to prepare our case study and to evaluate the quality of the chosen security mechanisms using its supporting, automatic framework. In the following sections we present all the significant components of the language that we utilized to create the model for our scenario.
The structures used in the QoP-ML represent a high level of abstraction, which allows one to concentrate on the quality of protection analysis. The QoP-ML consists of processes, functions, message channels, variables, and Quality of Protection (QoP) metrics. Processes are global objects grouped into the main process, which represents a single computer (host). The processes specify behavior, functions represent a single operation or a group of operations, and channels outline the environment in which the processes are executed. The QoP metrics define the influence of functions and channels on the quality of protection. In  the syntax, semantics and algorithms of the QoP-ML are presented in greater detail.
In the QoP-ML, an infinite set of variables is used for describing communication channels, processes and functions. The variables are used to store information about the system or a specific process. The QoP-ML is an abstract modeling language, so there are no special data types, sizes or value ranges. The variables do not have to be declared before they are used; they are declared automatically when used for the first time. The scope of the variables declared inside the high-hierarchy process (host) is global for all processes defined inside that host.
The system behavior is changed by the functions, which modify the states of the variables and pass objects through the communication channels. When defining a function, one has to set its arguments, which describe two types of factors: the functional parameters, written in round brackets, which are necessary for the execution of the function, and the additional parameters, written in square brackets, which have an influence on the system's quality of protection. The names of the arguments are unrestricted.
Equation rules play an important role in the quality of protection protocol analysis. The equation rules for a specific protocol consist of a set of equations asserting the equality of function calls. For instance, the decryption of encrypted data with the same key is equal to the original data.
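A sketch of how such an equation rule reduces a term, with function calls represented as nested tuples (this encoding is our own, not QoP-ML syntax); the rule rewrites s_dec(s_enc(data, K), K) to data:

```python
def reduce_term(term):
    """Apply the rule s_dec(s_enc(data, K), K) = data to a term
    encoded as a nested tuple ('fname', arg1, arg2)."""
    if (isinstance(term, tuple) and term[0] == "s_dec"
            and isinstance(term[1], tuple) and term[1][0] == "s_enc"
            and term[1][2] == term[2]):      # same key on both sides
        return term[1][1]                    # the original data
    return term                              # no rule applies
```

Reducing `('s_dec', ('s_enc', m, k), k)` yields `m`, while decryption under a different key is left unreduced.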
The processes are the main objects in the QoP-ML. The elements which describe the system behavior (functions, message passing) are grouped into processes. In a real system, the processes are executed and maintained by a single computer. In the QoP-ML, sets of processes are grouped into the higher-hierarchy process named host. All of the variables used in the high-hierarchy process (host) have a global scope for all processes which are grouped by that host. Normally, the variables used in one host process cannot be accessed from another high-hierarchy process; this is possible only when the variable is sent through a communication channel.
The communication between processes is modeled by means of channels. Any type of data can be passed through the channels, but the channels must be declared before data is passed through them. Data can be sent or received through the channels, which pass messages in First In First Out (FIFO) order. When a channel is declared with a non-zero buffer size, the communication is asynchronous; a buffer size equal to zero stands for synchronous communication. In synchronous communication, the sender transmits data through the synchronous channel only if the receiver is listening on that channel. When the size of the channel buffer is at least 1, a message can be sent through the channel even if no one is listening on it; this message will be delivered to the receiver when the process listening on the channel is executed.
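The buffer semantics described above can be sketched as follows; the class and its method names are illustrative, and capacity limits and blocking receives are omitted for brevity:

```python
from collections import deque

class Channel:
    """FIFO channel: a non-zero buffer permits sending without a
    waiting receiver (asynchronous); a zero buffer models the
    synchronous rendezvous, where a send requires a listener."""
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.fifo = deque()
        self.listeners = 0                   # receivers currently waiting

    def send(self, msg):
        if self.buffer_size == 0 and self.listeners == 0:
            raise RuntimeError("synchronous channel: nobody is listening")
        self.fifo.append(msg)

    def recv(self):
        return self.fifo.popleft()           # messages leave in FIFO order
```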
The main reason for introducing the algorithm structure was the need to specify a series of well-defined successive states that would let one precisely define a non-linear and variable execution time of a given function. The inspiration for creating this structure was an attempt to determine a packet's transmission time, which is not directly proportional to the size of the transmitted data. Algorithms were introduced along with the advanced network analysis module in . The definition of an algorithm is similar to the definition of a function. It is enclosed inside the algorithms structure, and starts with the alg keyword, followed by the algorithm name. Each algorithm has one parameter, which is the message being sent in the case of a communication step, or a function call expression in the case of an operation in a process. When implementing an algorithm in QoP-ML, it is possible to use arithmetic operations, constructions known from the C language, such as conditional statements (if) and loops (while), and two pre-defined functions: quality and size. The built-in quality function can be used only in algorithms which calculate the communication step time; it returns the quality of the link between the sender and the receiver (that is, the q parameter). On the other hand, the size function is used to determine the size of the algorithm's parameter, which can be a function call or a sent message (or one of its elements, accessed by index). Inside the algorithms structure, one can define as many algorithms as needed.
The system behavior, which is formally described by the cryptographic protocol, can be modeled with the proposed QoP-ML. One of the main aims of this language is to abstract the quality of protection of a particular version of the analysed cryptographic protocol. In the QoP-ML, the influence of the system protection is represented by means of functions. During the function declaration, the quality of protection parameters are defined and details about the function are described. These factors do not influence the flow of the protocol, but they are crucial for the quality of protection analysis. During that analysis, the function's quality of protection (QoP) parameters are combined with the next structure of QoP-ML, the security metrics. In this structure, one can abstract the functions' time performance, their influence on the security attributes required for the cryptographic protocol, or other factors important during the QoP analysis.
Analyzing the transmission line monitoring scheme, we utilized the QoP-ML to prepare a functional model of the considered segment of the smart grid environment. We gathered and used real hardware security metrics and developed appropriate scenarios in order to examine how different factors influence transmission time delays, the lifetime of sensor devices, and the confidentiality, integrity and availability of the data traversing transmission grid environments. In this section, we briefly discuss all the elements we prepared to create the transmission grid models, and analyze the results we gathered using the Automated Quality of Protection Analysis (AQoPA) tool . (The AQoPA tool performs the automatic evaluation and optimization of complex system models created in QoP-ML.) In this article, describing our model, we considered only one of the prepared scenarios. (The remaining scenarios are designed analogously, so they can be understood without further explanation.) Both the models and the AQoPA tool can be downloaded from the QoP-ML project webpage .
In QoP-ML, in order to perform a simulation, one needs to prepare and implement 3 basic elements, namely: the model itself, hardware security metrics and miscellaneous scenarios of the considered environment (which are also called versions). For more information about QoP-ML, its syntax, semantics, algorithms and example usages, please refer to [17,21].
Before we present the implementation details, let us describe the model in general. We have 4 types of communicating hosts: manager, sink, sensor and substation (in our model referred to as the base station). While there is no manager in a real-life deployment of the considered scenario, its abstraction in QoP-ML needs the manager to handle proper packet flows. The manager stores lists of sinks and sensors, and knows which sensor is assigned to which sink. Its main role is to send control messages to the sink nodes, in order to instruct them from which sensors they should collect data, or when it is the best time to perform data transmission to the substation. Sink nodes receive a list of their sensors from the manager, generate some parameters, send them to the assigned motes and wait for the collected data. After the data is collected, the sensors send it to their relay node. When the relay node receives a message from each of its four sensors and from the manager, it immediately starts routing to the substation. Communication ends when the base station receives packets from all of the directly connected sinks.
In Listing 1 we present the functions prepared for the transmission grid model. Lines 3–7 contain declarations of functions which are used by the manager to, for instance, handle the division of sensors among the proper sinks. Functions representing the type of the message traversing the transmission grid environment are defined in lines 8–14. The symmetric encryption and symmetric decryption functions (lines 16–17) take two functional arguments: the data to be encrypted/decrypted (data) and the key used for encryption/decryption (K). Besides the functional arguments, s_enc and s_dec take two additional QoP parameters: the algorithm used for encryption/decryption, and the size of the key in bits (key_size). The remaining functions (lines 19–21) refer to data collection, and are fairly self-explanatory.
The equational rules needed by the list operations and the symmetric cryptographic functions are defined in Listing 2; they are simply used to assert the equality of function calls and to reduce complex function calls. For instance, the equation in line 7 states that the symmetric decryption of symmetrically encrypted data with the same key returns the original data.
Secure communication between sinks all over the transmission line is a crucial element in our simulation: the communicating hosts need a medium over which to exchange messages. For this reason, when implementing the considered environment, we defined two communication channels (Listing 3): ch_WSN and ch_MGNT. Because the manager is only a helper host whose role is to handle packet flows, it uses the ch_MGNT channel for sending control information (this channel has a connection quality equal to 1 (the lower the q value, the better the connection quality) and a sending time equal to 0 ms). On the other hand, ch_WSN is responsible for transferring all the data traversing the transmission grid. The exact channel characteristics will be discussed when presenting the versions structure.
Moving on to the next point, let us now discuss the hosts which take part in the transmission grid communication, starting with the BaseStation host (Listing 4). The role of the base station is quite simple: in an infinite loop (lines 7–18), it waits for the data from the sinks scattered over the transmission line (lines 9–10), decrypts the data (only when it was encrypted (lines 12–15)), and saves it for further processing (line 17). Next, we have the Sensor host (Listing 5), responsible mainly for data collection. The very first instruction of the Sensor host is the generation of the symmetric key (line 3), which can later be used for data encryption (line 16). The sensor waits for the parameters sent by the sink (lines 7–9), among which there is an identifier (ID) of the device which acts as the relay node for this specific sensor. Then, the sensor performs a measurement to gather the data (lines 11–12) and optionally encrypts it (lines 14–17).
After the data is collected, the sensor notifies everyone (in this case the manager, since it waits for the message with the data_collected_msg() type) that the data is gathered and can be collected by the appropriate relay nodes (lines 19–20). When the sink node is ready to receive the data, the sensor simply creates a message and sends it to its relay node, using the previously obtained SINK_ID (lines 22–23).
The most essential part of our simulation is the routing of the data obtained from all sensors towards the base station (the substation, actually). The sink devices (Listing 6) are responsible for data gathering and transmission.
As can be seen in Listing 6, the sink consists of 3 processes, namely Main, WaitForData and HopByHopComm, each of which is in charge of different operations. The Main process (lines 6–22) waits for data from the Manager, in order to receive the list of sensors from which the sink needs to gather data (lines 8–9). After receiving the list of sensors, the sink generates parameters and sends them to the sensor devices (lines 13–20). The actual data gathering takes place in the WaitForData process (lines 24–43), in which the sink node waits for the data gathered by the sensors (lines 31–32). If the data was encrypted, the sink decrypts it (lines 34–37), saves it (line 39) and adds it to the list of the collected packets (line 40). The last process, HopByHopComm, is responsible for the actual data transmission towards the base station (lines 44–65). Its role is to receive the initial data from the Manager or from another Sink within its cluster (lines 47–48). Then, the sink decrypts the data (but only if it was encrypted (lines 50–53)), adds the data to the list of the gathered packets containing measurement information (line 55) and performs the routing algorithm, in order to find the ID of the next sink on the path to the base station (line 56). The remaining instructions concern the optional encryption (lines 58–61) of the collected data and the process of sending it to another sink, towards the base station. The HopByHopComm process ends when the packets from all the cluster heads reach the base station.
Last but not least, the Manager represents a host which is not available (and not needed) in a real-life deployment. As brought up earlier, its role is to act only as a helper host which manages the packet flows in our simulation. The Manager host consists of two processes, namely the PrepareMessages and Main processes. The PrepareMessages process is responsible for creating the lists of sensors which belong to the appropriate sinks along the transmission line (lines 7–39). Another function of this process is to maintain the list of sinks (lines 41–64) to which the Manager has to send the initial message, which then starts the data gathering and routing process. Acting as the fundamental managing process, the Main process' first job is to inform each sink along the transmission line from which sensors it can collect data. The Manager does this by sending the nodes_msg() type message (lines 69–84) to every sink. After notifying the sinks, the Manager copies the list of all the available sinks (line 86) and collects the information indicating the end of the data gathering process from the TelosB sensors (lines 88–92). When all nodes_msg() messages are gathered, the Manager prepares an empty list (optionally encrypted beforehand (lines 95–99)), which will later be sent to each cluster to start the data routing process. Next, the list of all sinks (which are the initiators of data routing within a cluster) is copied to a temporary variable (line 101). The last activity of the Manager host is to send the empty message to all the sinks which initiate communication towards the base station within a given cluster (lines 103–109). The Manager simply takes the ID of a sink from the previously prepared list (line 105), removes it from that list (line 106) and prepares a message (an empty list, actually; line 107), which is then sent to each sink, one by one, until the list ends (line 108).
In order to determine the transmission time of the packets sent through the transmission grid, we implemented an algorithm which calculates the transmission time between two sensors (Listing 7). The transmission time of a single packet is equal to a constant 18 ms plus an additional cost per transmitted byte. The while loop is used to handle messages with a payload larger than 110 bytes, which is the maximum payload size in ZigBee (assuming a 17-byte header; the maximum packet size is 127 bytes) . When the maximum size is exceeded, the payload is divided into multiple packets of 110 bytes each.
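The fragmentation logic can be sketched as below; note that the per-byte cost is elided in the text, so PER_BYTE_MS here is a placeholder value, not the paper's constant:

```python
MAX_PAYLOAD = 110      # bytes: 127-byte ZigBee frame minus a 17-byte header
BASE_MS = 18           # fixed per-packet cost from the model
PER_BYTE_MS = 0.12     # PLACEHOLDER: the actual per-byte cost is not given

def transmission_time_ms(payload_bytes):
    """One-hop transmission time, splitting the payload into
    110-byte packets as the while loop in Listing 7 does."""
    total = 0.0
    remaining = payload_bytes
    while remaining > 0:                 # one iteration per packet
        chunk = min(remaining, MAX_PAYLOAD)
        total += BASE_MS + PER_BYTE_MS * chunk
        remaining -= chunk
    return total

# a 320-byte aggregate from one pole is sent as three packets
# (110 + 110 + 100 bytes)
```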
Another important step in QoP-ML modeling is the gathering of security metrics. This process can be fully (or partly) automated using the Crypto Metrics Tool . Hardware security metrics (like, for instance, the time taken by encryption) can be gathered by the tool, while the remaining ones can be added by hand. In Listing 9 one can see the metrics obtained from TelosB devices.
Let us focus, for instance, on lines 26 and 27 of Listing 9. Here, the details of the encryption process performed by TelosB devices are defined. As we can see from the metric's header (line 26), we have information about the considered operation type, the utilized algorithm, the size of the encryption key, the time taken by the encryption itself (in ms), and finally, the size of the encrypted message (in bytes). The exact values, corresponding to the elements available in the metric's header, are given as follows.
The operation type (function name) is s_enc; alg refers to the utilized encryption algorithm, which in this case is AES-CTR. The key_size is equal to 128 bits, while the remaining parts concern the time and output size calculations (line 27). The gathered security metrics are utilized during the simulation process to take into account the influence of the hardware specifications on the performed operations. For more information about the security metrics themselves, their syntax, semantics and usage, please refer to [17,19,21].
Building the versions structure is the final step before the actual simulation. By preparing multiple versions, one is capable of defining different sequences of events without the need to modify the model. In the considered transmission grid analysis, we examine only two main scenario types: with (version Enc_Updated) and without (version NoEnc_Updated) packet encryption (Listing 10). The analyzed scenarios differ only in the hosts' instruction start-up process: the Enc_Updated version, in addition to running the base processes, also starts up the sub-processes responsible for cryptographic operations. Let us now discuss the role of each instruction, as they are used in both version structures.
Lines 5 to 8 (the set instructions) link the hosts with the appropriate security metrics, so that the operations performed by those hosts make use of suitable values (bear in mind that, for instance, the encryption time can differ between TelosB and MicaZ motes). The following lines (10–27) are the ones which differ between versions: in the NoEnc_Updated version, the run instructions start only the basic processes, without even knowing about the sub-processes, while Enc_Updated (lines 93–110) additionally executes the commands responsible for packet encryption. Another important role of the run instruction is the possibility to define how many host instances should be started. As shown in Listing 10, our WSN consists of 45 MicaZ sinks, 180 TelosB sensor devices, a base station, and the manager. The communication between the utilized devices is defined inside the communication structure, between lines 29 and 84. Here, the communication channel characteristics for each medium (such as the time taken by sending and receiving packets, or the current values) are given (lines 31–34).
The topology of the utilized WSN is specified with the help of the topology structure (lines 36–63). Every connection defined in the topology structure consists of the communicating hosts and an arrow which expresses the actual packet flow. Consider, for instance, line 51, where one can see two communicating sites: a sink and the base station. As can be seen, the second of the 45 sinks sends the packet straight to the base station (as indicated by the arrowhead, which points to the right, that is, from the sink to the base station). The topology structure is a very flexible and adaptable part of QoP-ML, since it allows defining the details of each connection separately, taking into account its key characteristics. For more information about the versions structure, refer to [17,19,21].
In this section we study the results gathered for several different scenarios, including variations in the number of groups, the direct connections to the substation, the routing algorithms and the security mechanisms. We compare the results of our proposed formulation and evaluate the time delays of the network, including the cases of constrained cellular coverage. The simulation results of the various implementation scenarios of WSNs and overhead smart grid networks aim at highlighting their performance and examining how inherent and imposed factors influence the maximum delay time of the different arrangement scenarios. The analysis below concerns only the results obtained for the transmission line where the distance between two adjacent poles is equal to 100 m. The remaining results (for distances equal to 50, 200 and 300 m) are summarized in the Appendix section, in Tables A6–A15. All the system parameters involved in the following analysis, as well as their default values, are reported in Table 3. These values help towards the establishment of realistic implementation scenarios. Note that all the assumptions of Section 3 are also satisfied.
Table 4 contains the results gathered for the first scenario (Scenarios 1a,b). In this scenario, we consider the transmission line to be divided into two sub-lines, each of which has only one direct link to the substation. Thus, the number of hops (intermediary relay nodes) that the obtained data must traverse to finally reach the substation is equal to 22 for the left sub-line, and 23 for the right sub-line. The size of the measurement data gathered by each sensor at every tower is the same, and is equal to 80 bytes; it takes about 4000 ms for a TelosB to transmit it to the relay node. Because there are 4 measurement sensors at each pole, the size of the data received by the relay node is 320 bytes, and the time taken for its transmission is calculated using the algorithm available in the QoP-ML model.
Looking at the results in Table 4, we can see that the time needed by a sink to send the measurement data to the neighboring relay node increases along with the number of hops, and is always the longest for the last node on the line (sink 22 and sink 45). Furthermore, the difference between transmitting encrypted and unencrypted data grows rapidly as well. Additionally, the results in Table 4 indicate that this linear network model suffers from an imbalance of workload: the relays located closer to the substations have to handle much more traffic than those sitting farther away on the line. Considering the topological constraints posed by the transmission lines, the low-bandwidth, low-data-rate wireless nodes fail to transmit such large amounts of data in a multi-hop manner.
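The imbalance can be made concrete with a back-of-the-envelope sketch, assuming (as the text suggests) that each relay forwards its own 320-byte aggregate plus everything received from the poles behind it; the function name is ours:

```python
BYTES_PER_POLE = 320   # four 80-byte sensor readings per pole

def relay_traffic(hops_from_far_end):
    """Bytes a relay must transmit in the linear model: its own
    measurements plus all upstream traffic it relays. Position 1 is
    the pole farthest from the substation."""
    return hops_from_far_end * BYTES_PER_POLE

# on the 23-hop sub-line, the relay next to the substation pushes
# 23 * 320 = 7360 bytes, versus 320 bytes for the farthest relay
```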
As a consequence of analyzing the results gathered for Scenario 1, it was necessary to identify a more efficient way of delivering the collected data to the substations. In order to balance the workload of the transmitting sinks, we established Scenario 2, where each relay on every pole along the transmission line has a direct connection to the substation. The size of the transmitted data and the transmission times are the same as in Scenario 1. Table 5 contains the results gathered for Scenarios 2a,b. As is evident from Table 5, by dividing all nodes into stand-alone groups, we managed to solve the issue of imbalanced workload: the time needed by each sink is almost equal, no matter whether we consider the first or the last relay node on the line. Moreover, the difference in transmission times between encrypted and unencrypted traffic is almost constant; it does not change as severely as in Scenario 1. Although this approach has many advantages (balanced workload, a constant time difference between the various security options), there are some drawbacks as well. One of them is cost: deploying cellular transceivers on each tower is a very expensive solution. While such a network can provide extremely low latency data transmission, this model is highly cost-inefficient, as it incurs huge installation and subscription costs. This is why this solution cannot be used in real-life deployments.
Because the two previous scenarios failed to solve the complex problem of finding the optimal locations of the cellular transceivers, we proposed to consider another approach (Scenarios 3a,b and 4a,b). Since the idea of splitting the relay nodes into groups succeeded in terms of workload balancing, we decided to adhere to the established rules; however, we modified them by dividing the relay nodes into fewer than 45 groups, in order to keep the balance and additionally decrease the cost. Looking at the results in Table 6 and Table 7, we see that the performance of each sink is still stable, even with the smaller number of groups (12 for Scenario 3, and even fewer for Scenario 4). Here too, the time increases along with the number of hops, but the growth is not as rapid as in Scenario 1. Moreover, the difference between the secure and insecure versions remained more or less constant.
As the main role of the wireless sensors in our research is to monitor electric power systems and react in a timely manner to possible power system disturbances, the above examinations can be of great help. By performing analyses with QoP-ML, we are able to choose the appropriate network architecture, routing algorithms, and security mechanisms for reliable and accurate monitoring of specific power system actions and disruptions. As previously stated, in power grid systems 4 main classes of disturbance can be distinguished, namely: wave, electromagnetic, electromechanical and thermodynamic disruptions. This classification of power grid disturbances is related to the time range within which a given event should be spotted and reported. In order to let smart grid operators react appropriately to a power grid disturbance, the time between the event's occurrence and the time it is reported by the monitoring wireless sensors should fall within a suitable range (Figure 1, Table 1). The simulation results helped us to decide which of the proposed scenarios should be used for monitoring given power grid disturbances.
We correlated the number of relay nodes on the way to the substation and the time taken by the sinks to deliver measured data with the power grid disturbance time frames (Figure 8, Figure 9 and Figure 10). Additionally, in Table 9 we gathered existing power system transients and, using the simulation results, determined which of the prepared scenarios can be used for monitoring specific transmission grid phenomena. It turned out that different scenarios are capable of monitoring different phenomena, and that achieving reliable, useful results and reacting in a timely fashion to power grid disturbances requires a profound analysis of the transmission grid architecture. As is evident from Table 9, wireless sensors are not the best choice for monitoring wave and electromagnetic phenomena (this applies to Scenarios 1, 2, 3 and 4). For this type of power grid disturbance, a faster solution (for instance, optical fiber) is needed. Looking at the time requirements of electromechanical and thermodynamic phenomena monitoring, it is clear that all prepared scenarios can handle this task. However, from the performance and economic point of view, it is preferable to choose Scenario 4 over Scenarios 2 and 3, since it requires fewer direct connections to the substation (which directly influence the total cost of WSN deployment) and offers a better capability for load balancing. Although from the beginning we did not consider the scenario in which data must traverse all sinks along the power transmission line to reach the base station a promising WSN architecture for smart grids, the above analysis further confirmed this thesis.
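The matching of scenario delays to disturbance time frames can be sketched programmatically. The delay and deadline values below are illustrative placeholders only (order-of-magnitude guesses), not the measured results reported in Table 9:

```python
# Sketch: match worst-case WSN scenario delivery delays to power grid
# disturbance time frames. All numeric values are hypothetical
# illustrations, not the measured simulation results.

# Worst-case data delivery delay per scenario, in seconds (hypothetical).
scenario_max_delay = {
    "Scenario 1": 300.0,   # linear chain: minutes for the farthest nodes
    "Scenario 2": 5.0,
    "Scenario 3": 8.0,
    "Scenario 4": 10.0,
}

# Upper bound of each disturbance's reporting time frame, in seconds
# (order of magnitude only; compare Figure 1 and Table 1).
disturbance_deadline = {
    "wave": 1e-3,               # microseconds to milliseconds
    "electromagnetic": 1.0,     # up to about a second
    "electromechanical": 600.0, # seconds to minutes
    "thermodynamic": 3600.0,    # minutes to hours
}

def suitable_scenarios(disturbance: str) -> list:
    """Return the scenarios whose worst-case delay fits the time frame."""
    deadline = disturbance_deadline[disturbance]
    return sorted(s for s, d in scenario_max_delay.items() if d <= deadline)

# No WSN scenario is fast enough for wave phenomena -- as the analysis
# concludes, these require a faster medium such as optical fiber.
assert suitable_scenarios("wave") == []
# All scenarios satisfy the slow thermodynamic time frame.
assert "Scenario 4" in suitable_scenarios("thermodynamic")
```

Under these assumed numbers, the selection reproduces the qualitative conclusion of Table 9: wave and electromagnetic phenomena are out of reach, while electromechanical and thermodynamic phenomena are covered by every scenario.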
The issues raised by the first scenario can be easily noticed in Figure 8 (left graph), which contains the time results for the left sub-line (nodes 1 to 22) and the right sub-line (nodes 23 to 45). The workload imbalance can be observed for the nodes residing closer (in terms of distance) to the substation (for instance, nodes 22 and 45). The time they need to serve network requests is on the order of minutes (while the nodes residing at the beginning of each sub-line need less than a minute to perform their task) and, more importantly, is not constant for each wireless node.
The scenarios in which wireless sensors are divided into groups bring a significant improvement, as evidenced by Figure 8 and Figure 9. Here, the workload is more or less stable and decreases for each sensor along the line. Time delays are also not as large as in Scenario 1 and stay within a couple of seconds (both for the encrypted and unencrypted versions). They do, however, increase with the number of hops inside the group. The most stable and delay-aware of these three scenarios is the second one; at the same time, it is the most expensive one to deploy.
The comparison of transmission times of encrypted and unencrypted data for Scenarios 5a,b can be observed in Figure 10. As can be seen, the difference between the transmission times is significant here. The gathered results clearly indicate that when encryption is used, the transmission time grows rapidly, causing greater time delays, whereas non-encrypted data transmission performs better in terms of minimizing time delays. For the transmission line consisting of 45 electric poles, the longest transmission time is approximately 18 min.
Analyzing the results gathered for the different scenarios, it is worth examining how the distance between the poles affects the transmission time. As can be observed from the results in Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8, Table A9, Table A10, Table A11, Table A12, Table A13, Table A14 and Table A15, when the distance between the poles changes, the transmission time delay increases only slightly. Consider, for instance, nodes 1, 22 and 45. For node 1, the time difference in the first scenario between the different distances is about one tenth of a second (the same is true for Scenarios 2–5). Node 22, located in the middle of the transmission line, is likewise characterized by negligible differences in transmission time across the different distances; only Scenarios 1 and 5 show slightly larger changes. An identical situation applies to sensor 45: the time difference between the distances exists in every scenario, but it is so small as to be almost imperceptible.
As mentioned in the Scenarios section, each of the proposed scenarios has two versions (Table 3): a secure version (always referred to as the (b) scenario, where, among others, encryption is used) and an insecure version with no security mechanisms at all (the (a) scenarios). For instance, Scenarios 1a, 2a and 5a are insecure, while the scenarios labelled with (b) (1b, 2b, ...) are the secure versions of the proposed solution. In Table 10 we gathered fundamental security concepts (confidentiality, integrity, authentication, freshness and availability) and evaluated the proposed scenarios in terms of these security attributes, in order to find out which of the scenarios provide the selected security features.
Confidentiality of data in a sensor network is achieved only if access to the data is restricted to authorized parties; under no circumstances should sensor readings leak outside the network. The standard approach to preventing this is encryption. The secure version of the introduced protocol uses encryption to protect the information from disclosure to unauthorized parties. The insecure version, lacking encryption, cannot protect the confidentiality of the information.
When it comes to integrity, neither solution can guarantee it. If data integrity is a specific concern, one should combine a cryptographic hash function with the encryption algorithm. Symmetric ciphers (such as the Advanced Encryption Standard (AES) algorithm working in counter (CTR) mode, utilized in our secure scenarios) do not by themselves provide integrity, because they do not detect malicious or accidental modifications to the data. To provide integrity, the solution is to wrap the data inside packages together with data that can be used to validate the integrity of the package, typically a hash-based checksum.
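Such package wrapping can be sketched with a keyed hash (HMAC) appended to the ciphertext, in the encrypt-then-MAC style. This is a minimal illustration, not the paper's protocol: the Python standard library has no AES, so the ciphertext below is a placeholder byte string, and the key sizes and tag placement are our own assumptions.

```python
import hashlib
import hmac
import os

TAG_LEN = 32  # HMAC-SHA256 digest size in bytes

def wrap(key: bytes, ciphertext: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect tampering."""
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def unwrap(key: bytes, package: bytes) -> bytes:
    """Verify the tag; raise if the package was modified in transit."""
    ciphertext, tag = package[:-TAG_LEN], package[-TAG_LEN:]
    expected = hmac.new(key, ciphertext, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return ciphertext

key = os.urandom(32)
ct = b"...AES-CTR ciphertext of a sensor reading..."  # placeholder bytes
package = wrap(key, ct)
assert unwrap(key, package) == ct

# Flipping a single bit of the package is detected:
tampered = bytes([package[0] ^ 1]) + package[1:]
try:
    unwrap(key, tampered)
    raise AssertionError("tampering went undetected")
except ValueError:
    pass
```

Note that a plain (unkeyed) checksum would not suffice against an active attacker, who could recompute it after modifying the data; keying the hash ties the tag to a secret the attacker does not hold.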
Authentication is very important for preserving network data integrity and preventing unauthorized access to the network. Without authentication mechanisms, an attacker can easily access the network and inject dangerous messages without the receivers knowing that the data originates from a malicious source. In the secure version of the protocol, authentication is assured by using an individual encryption key for every node taking part in the communication process; the encryption keys are hard-coded in every device. The insecure scenarios do not provide authentication.
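The per-node key idea can be illustrated as follows. The sketch below uses per-node HMAC keys rather than the paper's encryption keys, and the node IDs, key values and helper names are hypothetical; it only demonstrates how a sink can reject a reading that was not produced under the claimed node's key.

```python
import hashlib
import hmac

# Hypothetical per-node secret keys, hard-coded at deployment time
# (mirroring the paper's individual key per device).
NODE_KEYS = {
    1:  b"node-01-secret-key-example-bytes",
    22: b"node-22-secret-key-example-bytes",
    45: b"node-45-secret-key-example-bytes",
}

def tag_reading(node_id: int, reading: bytes) -> bytes:
    """Sensor side: authenticate a reading with the node's own key."""
    return hmac.new(NODE_KEYS[node_id], reading, hashlib.sha256).digest()

def verify_reading(node_id: int, reading: bytes, tag: bytes) -> bool:
    """Sink side: accept only readings tagged with the claimed node's key."""
    key = NODE_KEYS.get(node_id)
    if key is None:
        return False  # unknown node: reject
    expected = hmac.new(key, reading, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

reading = b"line temperature: 74.2 C"
tag = tag_reading(22, reading)
assert verify_reading(22, reading, tag)
# The same message replayed under another node's identity is rejected,
# because node 45's key produces a different tag:
assert not verify_reading(45, reading, tag)
```

A production deployment would additionally need freshness protection (e.g., counters or nonces) to stop replay of old but validly tagged readings, which the sketch omits.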
The choice of a good security mechanism for WSNs depends on the network application and environmental conditions. In a transmission grid environment, where wireless sensors are deployed on electric poles, availability should not be a big concern: besides batteries, sensors can be equipped with a mechanism that powers them using energy harvested from the electric line. In the case of a disturbance in the power supply from the line, wireless nodes can fall back on the backup energy source (the batteries). However, this solution brings an additional risk: a new kind of Distributed Denial of Service (DDoS) attack, the Delayed Distributed Denial of Service (DDDoS) attack. The DDDoS attack is especially dangerous for WSNs, where energy is a constrained resource: it can decrease the lifetime of the sink node by slowly and imperceptibly consuming its valuable power, leading to total exhaustion of energy resources while going unnoticed by traditional DDoS defense mechanisms.
Wireless sensors, intelligent substations and communication devices provide real-time information on system health so that smart grid operators can proactively prevent many issues. Using real-time information from wireless sensors and automated controls to detect and respond to system problems, a smart grid can automatically avoid or mitigate power outages, power quality problems, and service disruptions. However, all of the existing communication links in a smart grid environment introduce vulnerabilities, especially when they can be accessed over a wireless medium. Thus, serious security analysis is needed to ensure that the implemented solutions are truly adequate. In this paper, we proposed a reconfigurable network model and examined it in terms of security and time delays. Our objective was to minimize the time delays, depending on the number of sinks, the clusters and the confidentiality of the transmission; we took into account neither energy constraints nor the cost of deployment. Prior to the simulations, we presented a general formulation of the network architectures, the scenarios and the time delay problem. First, we analyzed the linear network model and concluded that it suffers from time delay issues and workload imbalance. Nevertheless, we showed that its performance can be improved by a careful choice of the positions of the direct wireless links. After investigating the first model, we utilized an optimization approach to minimize the maximum time delay. By examining different scenarios, we confirmed that the improvement made by the proposed solution is significant: while minimizing time delays, we managed to decrease the cost of network deployment and ensure secure communication at the same time.
The approach proposed in this paper defines an efficient, low-cost, and easily configurable network model. It is generic and accommodates variation in several factors. Its flexibility and ease of reconfiguration let us examine miscellaneous network configurations and decide which of them best satisfies our needs. We believe that the results of this paper can provide invaluable guidance for smart grid developers and help them design future smart grids that are well balanced in terms of performance, cost and time delays.
This work was realized with the collaboration of all authors. Bogdan Ksiezopolski and Michal Wydra contributed the idea of this work. Katarzyna Mazur created the model, prepared the simulations, analysed the data and wrote the major part of the article. Bogdan Ksiezopolski and Michal Wydra helped with editing the writing. Michal Wydra contributed the related work part concerning the transmission lines. Bogdan Ksiezopolski organized the work, provided the funding, defined and supervised the research and was involved in structuring the paper. All the authors contributed to the production of the paper, the discussion of the results and the reviewing of the intellectual content of the article, and finally approved the manuscript.
The authors declare no conflict of interest.