With the rapid development of big data and the Internet of Things (IoT), the number of networked devices and the volume of data are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively alleviate the bottlenecks of data transmission and data storage. However, security and privacy challenges also arise in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices, and the computation results can be verified using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method. Finally, analysis and simulation results show that our scheme is both secure and highly efficient.
Recently, fog computing has drawn a great deal of attention. It is a novel computing paradigm that extends cloud computing facilities and services to the edge of the network to provide computing, networking, and storage services between end devices and data centers [1,2]. Fog computing devices are located between endpoints and the traditional cloud, so resources and services are available closer to end-users, and the delays induced by service deployments can be reduced [3,4]. Compared with the more centralized cloud computing concept, fog computing provides resources and services in a distributed way. Combined with the traditional cloud, faster and more convenient computing services are provided to nearby devices based on their own computing, storage and network capacity. Since fog devices are localized, they provide low-latency communication and more context awareness. With all these advantages, the fog computing paradigm is well positioned for big data and real-time analytics.
Fog computing is a novel computing paradigm that aims at moving cloud computing (CC) facilities and services to the access network in order to reduce the delays induced by service deployments. Although big data and the Internet of Things (IoT) still rely on cloud computing, as the number of networked devices and the volume of data increase dramatically, fog-cloud computing can effectively alleviate the bottlenecks of data transmission and data storage. However, since fog devices are located at the edge of the network and are of much lower cost than cloud servers, they are more easily compromised and have lower trustworthiness [7,8], especially in the process of data sharing. Therefore, secure and efficient access control schemes need to be implemented in the fog-cloud computing environment [9,10]. Compared with traditional data access control schemes in cloud computing, the network structures and system models in the fog-cloud computing environment are different. Fog devices can provide computing, networking, and storage services for users, leaving less communication and computation cost to the users themselves; therefore, the cloud, the fog and end-users should all be considered in a new access control scheme.
Ciphertext-policy attribute-based encryption (CP-ABE) is regarded as one of the most suitable technologies for realizing fine-grained access control. This technique allows data owners to implement access control by setting up access structures. Compared with single-authority CP-ABE schemes, in multi-authority CP-ABE schemes the attributes come from different domains and are managed by different authorities. Moreover, multi-authority schemes avoid the single point of failure and the system bottleneck problem, which makes them more practical for data access control in the fog-cloud computing environment.
However, the encryption and decryption processes in CP-ABE systems are time-consuming and impose a heavy computational overhead on data owners and users. Outsourcing part of the encryption and decryption computation to a cloud server is one solution. However, the server may be “lazy”: it may not follow the algorithm, executing only part of the computations or deliberately returning incorrect results. Therefore, a method to verify the outsourced encryption and decryption needs to be proposed. Besides, user and attribute revocation is another issue in CP-ABE systems. On the one hand, the users in the system may change frequently; on the other hand, the attributes of users may also change, and revocation of any attribute may affect other users who share the same attribute. However, most existing schemes cannot support flexible and efficient user and attribute revocation in multi-authority cloud storage systems: the key update and ciphertext re-encryption operations are time-consuming. Therefore, verifiable outsourced multi-authority CP-ABE schemes with efficient and flexible user and attribute revocation need to be proposed.
In 2007, Bethencourt et al. put forward the first CP-ABE scheme. Over the last decade, many CP-ABE schemes [12,13,14,15,16,17] have been proposed. However, most of them are computationally expensive. To improve efficiency and reduce the overhead of users, several schemes that support outsourced computation and revocation have been proposed:
Green et al. proposed an ABE scheme with outsourced decryption. In their scheme, the traditional private keys are divided into user keys and transformation keys. Thus, complex decryption computations are outsourced to the cloud server, and users only need one exponentiation operation to recover the plaintext. However, their scheme cannot be applied to multi-authority systems. Based on this method, Yang et al. [19,20] put forward two multi-authority CP-ABE schemes that support outsourced decryption. Li et al. also proposed an outsourced ABE scheme that supports both outsourced key-issuing and decryption. However, they did not consider the correctness of the results returned by the cloud server.
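The key-splitting idea can be sketched with a toy ElGamal-style example (tiny, insecure parameters; all names and values are illustrative assumptions, not Green et al.'s actual ABE construction): the full decryption key x is blinded as a transformation key tk = x/z, the server performs the expensive exponentiation with tk, and the user finishes the decryption with a single exponentiation by the retained secret z.

```python
# Toy sketch of key-blinded outsourced decryption (illustrative only; the
# parameters are far too small to be secure).
import random

p = 1019            # safe prime, p = 2q + 1
q = 509             # prime order of the subgroup generated by g
g = 4               # generator of the order-q subgroup of Z_p^*

x  = random.randrange(1, q)          # full decryption key
z  = random.randrange(1, q)          # secret retained by the user
tk = (x * pow(z, -1, q)) % q         # transformation key given to the server

# Encrypt a subgroup element m under the public key y = g^x
m = pow(g, 77, p)
r = random.randrange(1, q)
c1, c2 = pow(g, r, p), (m * pow(pow(g, x, p), r, p)) % p

partial = pow(c1, tk, p)             # server: heavy work, learns nothing of m
# user: one exponentiation by z, then divide it out of c2
recovered = (c2 * pow(pow(partial, z, p), -1, p)) % p
assert recovered == m
```

Note that the server only ever sees tk, which reveals nothing about x without z; this is the property that lets the cloud (or, in VO-MAACS, a fog device) do the heavy lifting safely.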
To solve this problem, Lai et al. introduced the notion of verifiability for ABE and proposed an ABE scheme with verifiable outsourced decryption. In their scheme, however, both the length of the ciphertext and the computational cost of encryption are doubled. Later, Li et al. presented an outsourced ABE scheme with checkability that supports both outsourced key-issuing and decryption. However, the length of the ciphertext and the number of expensive pairing computations grow with the number of attributes.
To address this problem, two ABE schemes [24,25] with constant-length ciphertexts were put forward. However, their constructions cannot be applied to ABE schemes with linear secret sharing schemes (LSSS). Mao et al. proposed a generic construction of attribute-based encryption with verifiable outsourced decryption. Their CPA-secure construction has more compact ciphertexts and lower computational costs: users only need a constant number of simple computations to decrypt the ciphertext.
Ostrovsky et al. first proposed a fine-grained user revocation scheme based on CP-ABE that supports negative clauses. With the help of a semi-trusted service provider, Ibraimi et al. put forward a CP-ABE scheme that achieved immediate attribute revocation for the first time, but their construction cannot be applied to an outsourcing environment. Yu et al. presented a CP-ABE scheme in which proxy encryption technology is introduced. The scheme achieves immediate attribute revocation, and the proxy server also shares the authority's job; however, the proxy server needs to be online all the time. Another CP-ABE scheme with fine-grained attribute revocation was put forward by Hur et al. They use attribute group keys to re-encrypt the ciphertext, but their scheme cannot prevent collusion attacks. Another revocable CP-ABE scheme was proposed by Xie et al. In their scheme, the key update computations are greatly reduced. Later, Yang et al. put forward a proxy-assisted CP-ABE scheme that provides efficient cloud data sharing and user revocation.
In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most of the encryption and decryption computation is outsourced to fog devices, and the computation results can be verified using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method. Our contributions can be summarized as follows: we outsource most encryption and decryption computations to fog devices, provide a verification method for the outsourced results, and design an efficient user and attribute revocation method.
The remainder of this paper is organized as follows: we first give some preliminaries in Section 2. Then, we give the definition of the system model and framework in Section 3. In Section 4, we present our VO-MAACS construction. Section 5 describes the security and performance analysis of our scheme. Finally, the conclusions are given in Section 6.
In this section, some fundamental background used in this paper is provided, including bilinear maps, access structure and linear secret sharing scheme (LSSS).
Let $\mathbb{G}_1$, $\mathbb{G}_2$ and $\mathbb{G}_T$ be three cyclic groups of prime order $p$. A bilinear map is a map $e: \mathbb{G}_1 \times \mathbb{G}_2 \rightarrow \mathbb{G}_T$ with the following properties: (1) Bilinearity: for all $u \in \mathbb{G}_1$, $v \in \mathbb{G}_2$ and $a, b \in \mathbb{Z}_p$, $e(u^a, v^b) = e(u, v)^{ab}$; (2) Non-degeneracy: $e(g_1, g_2) \neq 1$ for generators $g_1$ of $\mathbb{G}_1$ and $g_2$ of $\mathbb{G}_2$; (3) Computability: $e(u, v)$ can be computed efficiently for all $u \in \mathbb{G}_1$ and $v \in \mathbb{G}_2$.
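For intuition, the bilinearity property can be sketched in plain Python by representing each group element $g^x$ by its exponent $x$ (a toy model only, with no cryptographic security; the prime and the sample exponents are illustrative assumptions):

```python
# Toy model of a symmetric bilinear map: a group element g^x is represented
# by its exponent x in Z_q, so e(g^a, g^b) = e(g, g)^{ab} becomes
# e(a, b) = a*b mod q.  Elements of the target group are likewise
# represented by their exponents, so multiplication there is addition.
q = 7919  # a small prime "group order" (illustrative)

def e(a, b):
    """Toy pairing on exponent representations: bilinear by construction."""
    return (a * b) % q

a, b, c = 15, 42, 23
# Bilinearity: e(g^(a+c), g^b) = e(g^a, g^b) * e(g^c, g^b)
assert e((a + c) % q, b) == (e(a, b) + e(c, b)) % q
# e(g^a, g^b) = e(g, g)^(ab): both sides equal a*b mod q
assert e(a, b) == (a * b) % q
```

A real pairing (e.g. over elliptic-curve groups, as provided by the Charm framework used in Section 5) has the same algebraic behavior but is hard to invert.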
Let $\{P_1, P_2, \ldots, P_n\}$ be a set of parties. A collection $\mathbb{A} \subseteq 2^{\{P_1, \ldots, P_n\}}$ is monotone if for all $B, C$: if $B \in \mathbb{A}$ and $B \subseteq C$, then $C \in \mathbb{A}$. An access structure (respectively, monotone access structure) is a collection (respectively, monotone collection) $\mathbb{A}$ of non-empty subsets of $\{P_1, \ldots, P_n\}$, i.e., $\mathbb{A} \subseteq 2^{\{P_1, \ldots, P_n\}} \setminus \{\emptyset\}$. The sets in $\mathbb{A}$ are called the authorized sets, and the sets not in $\mathbb{A}$ are called the unauthorized sets.
We recall the description of LSSS as follows. Let $\Pi$ be a secret sharing scheme over a set of parties $\mathcal{P}$ realizing an access structure $\mathbb{A}$. We say that $\Pi$ is a linear secret sharing scheme over $\mathbb{Z}_p$ if: (1) the shares for each party form a vector over $\mathbb{Z}_p$; (2) there exists an $\ell \times n$ share-generating matrix $M$ for $\Pi$ and a function $\rho$ mapping each row of $M$ to a party, such that for the column vector $v = (s, r_2, \ldots, r_n)$, where $s \in \mathbb{Z}_p$ is the secret to be shared and $r_2, \ldots, r_n \in \mathbb{Z}_p$ are chosen at random, $Mv$ is the vector of $\ell$ shares and the share $(Mv)_i$ belongs to party $\rho(i)$.
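A minimal sketch of LSSS sharing and reconstruction, here for the illustrative policy (A AND B) OR C over a toy prime field (the matrix, attribute names and modulus are assumptions chosen for illustration, not the paper's parameters):

```python
import random

p = 2**31 - 1  # prime modulus for the secret-sharing field (illustrative)

# LSSS for the policy (A AND B) OR C.  Row i of M is mapped to an attribute
# by rho; a share is the inner product of that row with v = (s, r), whose
# first entry is the secret s.
M   = [[1, 1], [0, -1], [1, 0]]
rho = ["A", "B", "C"]

def share(s):
    v = [s, random.randrange(p)]          # first entry is the secret
    return {rho[i]: sum(m * x for m, x in zip(row, v)) % p
            for i, row in enumerate(M)}

def reconstruct(shares, coeffs):
    # coeffs c_i satisfy sum_i c_i * M_i = (1, 0) for an authorized set,
    # so the weighted sum of shares recovers the secret s.
    return sum(coeffs[a] * shares[a] for a in coeffs) % p

s = 123456789
sh = share(s)
# {A, B} is authorized with coefficients c_A = 1, c_B = 1
assert reconstruct(sh, {"A": 1, "B": 1}) == s
# {C} alone is authorized with c_C = 1
assert reconstruct(sh, {"C": 1}) == s
```

Note that a single share (say, for A alone) is uniformly random and reveals nothing about s; only sets whose rows span the target vector (1, 0) can reconstruct.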
In this section, the system model and framework of our scheme are described.
A simple three-level hierarchy is adopted in our fog-cloud system, as illustrated in Figure 1. In this framework, each terminal device is connected to a nearby fog device. Fog devices are interconnected, and each of them is linked to the cloud.
In general, a layer of fog is added between the cloud server and terminal devices so that some computations on the cloud server can be delegated to the fog devices which are closer to the terminal devices. Thus, different tasks from different regions can be executed by the corresponding fog devices simultaneously, which greatly improves the efficiency. Fog devices are responsible for data transmission and data storage. Moreover, they are also in charge of part of the encryption and decryption computations. The cloud server is responsible for storing the ciphertext and the user proxy keys, as well as the ciphertext re-encryption operations and user proxy keys update operations when revocation occurs.
Our multi-authority fog-cloud system consists of six entities: a cloud service provider (CSP), fog devices (FDs), a global certificate authority (CA), attribute authorities (AAs), data owners (DOs) and data users (DUs), as shown in Figure 2.
CA is a fully trusted global certificate authority in the system. It accepts the registration of all AAs and DUs in the system, and it is responsible for issuing a global unique identity for each DU and a unique identity for each AA. However, it does not participate in any attribute management and any generation of secret keys associated with attributes.
Each AA is an independent attribute authority that is responsible for issuing, revoking and updating users’ attributes within its administration domain. In our scheme, each AA is responsible for generating a public attribute key for each attribute it manages and a user private key, which consists of a user proxy key and a user secret key, for each DU. In particular, the user proxy key is stored at the CSP and the user secret key is kept by the DU.
DOs define access control policies over attributes from multiple attribute authorities and then encrypt the data following those policies. After that, they upload the encrypted data to the CSP.
The CSP is responsible for storing the ciphertext and the user proxy keys, and provides data access service to DUs. It is also in charge of the ciphertext re-encryption operations and user proxy key update operations when revocation occurs.
FDs are responsible for data transmission and data storage. Moreover, they are also in charge of part of the encryption and decryption computations. They can help generate part of the ciphertext for DOs, as well as decrypt part of the ciphertext for DUs. Only for those DUs whose attributes satisfy the access policy will FDs decrypt the ciphertext with their proxy keys. After that, they send the partially decrypted data to the corresponding DUs.
DUs can request their secret keys from the relevant authorities. After downloading any encrypted data from the CSP, a DU first asks an FD to decrypt it with his proxy key. If the attribute set of the DU meets the access policy, the FD decrypts the ciphertext and sends the partially decrypted data to the DU. Upon receiving the partially decrypted data from the FD, the DU can recover the data with his secret key.
In our multi-authority fog-cloud system, we assume that the CA is fully trusted in the system. Each AA is also trusted, but it can be corrupted by an adversary. The CSP and FDs are semi-trusted. They may leak the encrypted data to some malicious users, but will execute the tasks assigned by each authority. DUs are assumed dishonest and may collude to obtain unauthorized access to data.
Global Setup . The global setup algorithm is run by the CA. On input of the security parameter and the attribute universe description, it outputs the global parameter, a unique identity uid for each user and a unique identity aid for each authority.
Authority Setup . The authority setup algorithm is run by each authority. On input of the authority identity , it outputs public attribute keys for all attributes issued by each authority and a pair of authority public key and authority secret key . Here denotes the involved authority set.
Encrypt_out . The outsourced encryption algorithm is run by the FD. On input of the global parameter and a set of public attribute keys , it outputs the partially encrypted ciphertext .
Verify_enc . The outsourced encryption verification algorithm is run by a DO. On input of the global parameter and a partially encrypted ciphertext, it outputs a bit indicating whether the FD returned the correct result.
Encrypt_user . The user encryption algorithm is run by a DO. On input of the global parameter , a set of authority public keys , a set of public attribute keys , a partially encrypted ciphertext , a message and an access structure , it outputs the ciphertext .
KeyGen . The key generation algorithm is run by each authority. On input of the global parameter , the user identity , a set of user attributes , the authority secret key , and a set of attribute public keys , it outputs user proxy key and user secret key .
Decrypt_out . The outsourced decryption algorithm is run by a FD. On input of the global parameter , the proxy keys and the ciphertext , it outputs the partially decrypted ciphertext .
Verify_dec . The outsourced decryption verification algorithm is run by a DU. On input of the global parameter, the ciphertext and a partially decrypted ciphertext, it outputs a bit indicating whether the FD returned the correct result.
Decrypt_user . The user decryption algorithm is run by a DU. On input of the ciphertext , the partially decrypted ciphertext and the user secret key , it outputs the message .
URev . The user revocation algorithm is run by the CSP. On input of the revoked user identity and the proxy key list , it outputs the updated proxy key list .
ReKeyUpdate . The key update algorithm is run by the involved authorities. On input of the uid of each non-revoked user, the proxy key and the current attribute version key, it outputs the version update key and the proxy update key.
CTUpdate . The ciphertext update algorithm is run by a DO. On input of the version update key and the partially encrypted ciphertext , it outputs the ciphertext update key .
PxKUpdate . The proxy key update algorithm is run by the CSP. On input of the uid of each non-revoked user, the current proxy key and the proxy update key, it outputs a new proxy key for each non-revoked user who has the attribute.
ReEnc . The re-encryption algorithm is run by the CSP. On input of the current ciphertext and the ciphertext update key , it outputs a new ciphertext .
In this section, we give the concrete construction of VO-MAACS, which is based on Lewko's scheme, together with the verification method and the revocation scheme.
Global Setup . The global setup algorithm takes a security parameter and a small attribute universe description as input. Let , and be multiplicative groups with the same prime order , and be the bilinear map. Let be the generator of and be the generator of . Let be a hash function and be a hash function that maps attributes to an element of , so the security will be modeled in the random oracle model. The CA then chooses a random number and sets the global parameter as . Each authority, fog device and user should register itself with the CA during the global setup process. The CA then assigns a unique global authority identity aid to each legitimate authority and a unique global user identity uid to each legitimate user.
Authority Setup . Let denote the set of all attributes managed by and denote the involved authority set. first chooses two random exponents . For each attribute , chooses an attribute version key as and generates the public attribute keys as . Then it publishes as its public key and keeps as its secret key.
Encrypt_out . FD first chooses a random number and computes . For , it randomly picks and computes:
Then, it outputs the partially encrypted ciphertext .
Encrypt_user . Let be a matrix, where denotes the total number of all the attributes. The function maps rows of the matrix to attributes. DO first chooses a random secret exponent and a random vector with as its first entry, where are used to share the secret exponent . For , it computes , where is the vector corresponding to the i-th row of . After that, it randomly chooses and computes:
are used to correct the shares of and randomize . is used to verify the result of outsourced decryption. Then, it outputs the intact ciphertext .
KeyGen . first assigns a set of attributes to each legal user, then chooses a random number for each user and let as the user secret key. Then, runs the key generation algorithm to generate the user proxy key as:
The proxy keys are sent to CSP who will add them in its proxy key list as , and the user secret keys are sent to the corresponding DUs.
Decrypt_out . When a user queries the encrypted data in the system, the CSP first checks his attribute set. If his attributes do not satisfy the access policy, the CSP outputs . Otherwise, it sends the ciphertext and the corresponding proxy keys to an FD. The FD first chooses a set of constants such that, if are valid shares of the secret according to , then , where . Then it computes:
After that, FD sends the partially decrypted ciphertext to the user.
Decrypt_user . Upon receiving the partially decrypted ciphertext from FD, the user runs the user decryption algorithm to decrypt the ciphertext by using its secret key . It computes as:
The FD may be “lazy”: it may not follow the algorithm, executing only part of the computations or deliberately returning incorrect results. If this happens, a DO cannot notice the error, and a large part of the computations will be affected. Therefore, we propose a verification method that can verify the results of both outsourced encryption and outsourced decryption.
Our verification method includes two algorithms:
Verify_enc . Upon receiving the partially encrypted ciphertext from the FD, the DO first verifies whether the check equation holds. If it does not hold, the DO outputs the bit indicating that the FD returned an incorrect result. Otherwise, the DO computes and , where . Then, it picks a security parameter and randomly chooses . After that, it computes and . If the two values are equal, the DO outputs the bit indicating that the FD returned the correct result; otherwise, it outputs the bit indicating an incorrect result.
Here, we adopt the idea of . Instead of checking the results in the ciphertext one by one, we use a batch verification algorithm to check them all together. Obviously, our solution is much more efficient than the naive per-component verification method.
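The batch idea can be sketched as follows: instead of verifying each component equality separately, one checks a single product randomized by fresh exponents, so a tampered batch passes only with small probability (toy group and hypothetical values, not the paper's actual check equation):

```python
# Sketch of small-exponent batch verification in a toy group Z_p^*
# (illustrative parameters; real schemes batch pairing equations this way).
import random

p = 1019  # small prime for illustration

def batch_verify(A, B):
    """Check A_i == B_i for all i via one randomized product comparison."""
    r = [random.randrange(1, p - 1) for _ in A]   # fresh random exponents
    lhs = rhs = 1
    for ai, bi, ri in zip(A, B, r):
        lhs = (lhs * pow(ai, ri, p)) % p
        rhs = (rhs * pow(bi, ri, p)) % p
    return lhs == rhs

good = [pow(3, i + 1, p) for i in range(10)]
assert batch_verify(good, list(good))          # honest results pass
bad = list(good)
bad[4] = (bad[4] * 2) % p                      # one tampered component
assert not batch_verify(good, bad)             # the discrepancy is detected
```

The saving is that the verifier pays for one product comparison (and cheap small exponentiations) instead of one full check per component, which is why the paper's batch Verify_enc beats one-by-one verification.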
Verify_dec . Upon receiving the partially decrypted ciphertext from the FD, the user computes and checks whether the equation holds. If it holds, the user outputs the bit indicating that the FD returned the correct result; otherwise, it outputs the bit indicating that the FD returned an incorrect result.
In our scheme, when user revocation happens, we do not need to re-encrypt the ciphertext or update the other non-revoked users’ secret keys. The only operation needed is to send a user revocation message containing the uid of the revoked user to the CSP, and then let the CSP delete the revoked user’s proxy key. Without the correct proxy key, the FD cannot perform the outsourced decryption algorithm for the revoked user. Thus, the revoked user cannot recover the original data. The user revocation algorithm is described as follows:
URev . When the CSP receives the user revocation message from a DO, it deletes the proxy key corresponding to the revoked uid from the list and outputs the updated proxy key list .
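A minimal sketch of this proxy-key-deletion idea (the list layout and the names PxKList, urev and decrypt_out here are illustrative assumptions):

```python
# Revocation as deletion: once the revoked user's proxy key is removed from
# the CSP's list, the FD can no longer partially decrypt on that user's
# behalf, and no other key or ciphertext needs to change.
PxKList = {"uid_alice": "PxK_alice", "uid_bob": "PxK_bob"}

def urev(pxk_list, revoked_uid):
    """Return the updated proxy key list with the revoked entry removed."""
    return {uid: k for uid, k in pxk_list.items() if uid != revoked_uid}

def decrypt_out(pxk_list, uid):
    """The FD can serve a user only if a proxy key for uid is still listed."""
    return pxk_list.get(uid)  # None: the outsourced decryption is refused

updated = urev(PxKList, "uid_bob")
assert decrypt_out(updated, "uid_bob") is None        # revoked: FD refuses
assert decrypt_out(updated, "uid_alice") is not None  # others unaffected
```

Because the DU alone cannot decrypt without the FD's partial decryption, deleting the proxy key suffices; this is what makes user revocation in the scheme essentially free.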
There are two phases in attribute revocation: Key update and Ciphertext re-encryption.
Phase 1: Key update
The key update in turn includes three steps: RekeyUpdate, CTUpdate and PxKUpdate.
Let the non-revoked users be all users other than the revoked user. The involved authority first generates a new attribute version key . It then computes the version update key as . After that, it uses it to compute the proxy update key as for each non-revoked user who has the attribute. Then it updates the public attribute key of the revoked attribute as , and broadcasts a message to each DO so that they can get the updated public attribute key of the revoked attribute. After that, the proxy update key is sent to the CSP to update the proxy keys and the version update key is sent to the DO.
Upon receiving the version update key and the partially encrypted ciphertext , the DO computes the ciphertext update key as . Then, the ciphertext update key is sent to the CSP to update the ciphertext.
Upon receiving the proxy update key , CSP updates the corresponding proxy keys as for each non-revoked user who has the attribute . Then the proxy keys are updated as:
Phase 2: Ciphertext re-encryption
Upon receiving the ciphertext update key , CSP updates the corresponding ciphertext as . Then the new ciphertext is published as:
Apparently, we can conclude that most of the update and re-encryption work is outsourced to the CSP, which greatly reduces the overhead of DOs. Meanwhile, we do not need to update the entire ciphertext and user proxy keys. Only those components which are involved with the revoked attribute need to be updated. In this way, our scheme can greatly improve the efficiency of attribute revocation.
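The version-key update pattern described above can be sketched in a toy group (illustrative parameters and names; the real scheme applies the same multiplicative update to pairing-based key and ciphertext components):

```python
# Toy sketch of attribute revocation via version-key update: the version key
# vk sits in the exponent of both the proxy-key component and the matching
# ciphertext component, so one update exponent delta = vk'/vk refreshes both.
import random

p, q, g = 1019, 509, 4        # order-q subgroup of Z_p^* (illustrative)

u  = random.randrange(1, q)                 # a non-revoked user's randomness
vk = random.randrange(1, q)                 # current attribute version key
K  = pow(g, (u * vk) % q, p)                # proxy-key component carrying vk
C  = pow(g, vk, p)                          # matching ciphertext component

def matches(key_comp, ct_comp, u):
    # Stand-in for the pairing check: both must carry the same version key.
    return pow(ct_comp, u, p) == key_comp

vk2 = random.randrange(1, q)                # new version key after revocation
while vk2 == vk:
    vk2 = random.randrange(1, q)
delta = (vk2 * pow(vk, -1, q)) % q          # version update key vk'/vk

K_new = pow(K, delta, p)                    # PxKUpdate for non-revoked users
C_new = pow(C, delta, p)                    # ReEnc of the stored ciphertext

assert matches(K_new, C_new, u)             # non-revoked user still decrypts
assert not matches(K, C_new, u)             # stale (revoked) key now fails
```

Only the components tied to the revoked attribute are re-randomized, which is exactly why the scheme avoids re-encrypting the entire ciphertext or reissuing whole keys.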
In this section, a comprehensive analysis of VO-MAACS is provided, including security analysis and performance analysis.
The correctness of our scheme can be easily proved by the following equations:
When there is no attribute revocation:
When the attribute is revoked from a user whose identity is :
Therefore, VO-MAACS satisfies correctness.
In our system, the FD decrypts the ciphertext, using the corresponding proxy keys, only for users whose attributes satisfy the access policy. Users whose attributes do not satisfy the access policy cannot receive the partially decrypted ciphertext from the FD; thus, they are not able to recover the original data. When a user is revoked, his proxy key is deleted by the CSP. Without the proxy key, he cannot obtain the partially decrypted ciphertext either. Therefore, with respect to users whose attributes do not satisfy the access policy, our solution satisfies data confidentiality.
In addition, although the CSP and FDs can obtain the user proxy keys, they cannot decrypt the ciphertext without the corresponding user secret keys. Similarly, they cannot collude with other users to recover the data. Therefore, with respect to the CSP and FDs, our solution also satisfies data confidentiality.
In our system, each user is assigned with a unique identity uid, and each key issued by different AA is associated with a uid. Therefore, only the keys associated with the same uid can be used to decrypt the ciphertext. Other users cannot collude to decrypt the ciphertext. In addition, there exists a situation that some AAs may issue the same attributes. Since each AA has a unique identity aid, all attributes are distinguishable. Therefore, users cannot replace some of the components in the keys from the AA by using the component in the key from another AA.
We implemented our scheme in Charm, a framework developed to facilitate the rapid prototyping of cryptographic schemes and protocols. It is based on the Python language, which allows the programmer to write code close to the theoretical descriptions. Charm also provides routines for applying and using the LSSS schemes needed for attribute-based systems. All our implementations were executed on an Intel® Pentium® G630 CPU @ 2.70 GHz with 4.00 GB RAM running the Ubuntu 14.04 64-bit system and Python 2.7.
In our experiment, access policies are generated in the form a1, a2, …, aN, where ai is an attribute. We set 20 distinct access policies in this form, with N increasing from 20 to 200, repeat each instance 20 times and take the average values as the experimental results. We simulate the computing time incurred in encryption and decryption. Since our scheme is based on Lewko's scheme, we compare our scheme with it in terms of user encryption time and user decryption time. In our experiments, the number of attributes per authority is set to 10. The times for outsourced encryption and user encryption are shown in Figure 3a,b, respectively.
In Figure 3a, the Encrypt_out time is approximately 0.1~1.4 s, and it increases almost linearly with the number of attributes. In Figure 3b, since the major computations are outsourced to the FDs, only a few operations are left for DOs; therefore, the Encrypt_user time in our scheme is much less than that in Lewko's scheme. Similarly, Figure 4 shows the times for outsourced decryption and user decryption. In Figure 4a, the Decrypt_out time is approximately 0.3~3 s and, like the Encrypt_out time, it increases linearly with the number of attributes. In Figure 4b, as the major computations are outsourced to FDs, only a few operations are left for DUs; therefore, the Decrypt_user time in our scheme is much less than that in Lewko's scheme.
The computing cost for verification of outsourced encryption is shown in Figure 5. The time for Verify_enc is approximately 0.1~0.8 s and it increases almost linearly with the number of attributes. Figure 6 describes the comparison of computing cost of CSP, AA and DO in the attribute revocation process.
In fact, most of the computing overhead, such as the proxy key updates and ciphertext re-encryption, is outsourced to the CSP, and only a few computations are left for the AAs and DOs. Therefore, the computing cost for DOs can be greatly reduced. Apparently, our scheme requires less time for both encryption and decryption than Lewko's scheme, and the computing cost for DOs in the attribute revocation process is greatly reduced. Therefore, we can conclude that our scheme's computational efficiency is much better than that of Lewko's scheme.
To realize data access control in fog-cloud computing systems, we have proposed a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices, and the computation results can be verified using our verification method. Meanwhile, to address the revocation issue, we designed an efficient user and attribute revocation method. Finally, the analysis and simulation results show that our scheme is both secure and highly efficient.
This work has been financially supported by the National Key Research and Development Program under Grant 2017YFB0802304, the National Natural Science Foundation of China (No. 61303216 and No. 61272457), Natural Science Basic Research Plan in Shaanxi Province of China (No. 2017JM6004), the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT170312), the Research Project of National Time Service Center, Chinese Academy of Sciences, China (No. 2017FWCGCZ0124) and National 111 Program of China B16037 and B08038.
K.F. and J.W. conceived and designed the experiments; K.F. and J.W. performed the experiments; X.W. contributed analysis tools; H.L. and Y.Y. analyzed the data; K.F. and J.W. wrote the manuscript. All authors read and approved the manuscript.
The authors declare no conflict of interest.