International Journal of Computational Intelligence Systems

Volume 11, Issue 1, 2018, Pages 45 - 57

Entropy Measures of Probabilistic Linguistic Term Sets

Authors
Hongbin Liu1, lhbnwpu@hotmail.com, Le Jiang2, jlmath@126.com, Zeshui Xu3, xuzeshui@263.net
1School of Mathematics and Information Science, Henan University of Economics and Law, Zhengzhou, Henan 450046, China
2School of Mathematics and Information Science, Zhengzhou University of Light Industry, Zhengzhou, Henan 450000, China
3Business School, Sichuan University, Chengdu, Sichuan 610064, China.
*Corresponding author.
Received 11 May 2017, Accepted 14 September 2017, Available Online 1 January 2018.
DOI
10.2991/ijcis.11.1.4
Keywords
Probabilistic linguistic term set; fuzzy entropy; hesitant entropy; total entropy
Abstract

The probabilistic linguistic term sets (PLTSs) are a powerful tool for dealing with hesitant linguistic situations in which each provided linguistic term has a probability. The PLTSs contain uncertainties caused by the linguistic terms and their probability information. In order to measure such uncertainties, three entropy measures are proposed: the fuzzy entropy, the hesitant entropy, and the total entropy. The fuzzy entropy measures the fuzziness of the PLTSs, and the hesitant entropy measures the hesitation of the PLTSs. To facilitate the computation of all the uncertainties contained in the PLTSs, the total entropy is proposed. Some properties and formulas of the entropy measures are introduced. A multi-criteria decision making model based on the PLTSs is introduced by using the proposed entropy measures. An illustrative example is provided and a comparative analysis with an existing method is given.

Copyright
© 2018, the Authors. Published by Atlantis Press.
Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).

1. Introduction

Fuzzy sets (FSs)32 have provided great convenience in modeling uncertainties, and have been applied successfully to many fields. Many extensions of FSs were introduced, such as the intuitionistic fuzzy sets1,28, the hesitant fuzzy sets (HFSs)17,18, the dual HFSs36, and the hesitant fuzzy linguistic term sets (HFLTSs)15. The HFSs, which permit multi-valued membership degrees, facilitate decision makers when they are hesitant in providing preferences. Under linguistic environment, Rodríguez and Martínez14 provided an overview on the relationship between the process of computing with words and decision making. Rodríguez et al.13 justified the use of HFLTSs in complex linguistic contexts. The HFLTSs similarly permit decision makers to express their qualitative assessments by using several linguistic terms, and they are generally transformed from comparative linguistic expressions close to humans' cognitive processes. In order to overcome the limitation of the HFLTSs, whose linguistic terms are consecutive, the extended HFLTSs (EHFLTSs)19,22 were introduced, whose linguistic terms may take any value in a linguistic term set. The linguistic terms in the HFLTSs and the EHFLTSs are generally viewed as equally important since no additional information can be obtained from them directly. Liu and Rodríguez9 introduced the fuzzy envelope for HFLTSs to embody, from an intuitive viewpoint, the different importance degrees of the linguistic terms in the HFLTSs. Zhang et al.33 introduced the possibility distribution assessments based on the expression form of discrete FSs. Such an expression extended the proportional linguistic 2-tuple20 to a more general form, and provided the symbolic proportion of each linguistic term in a linguistic term set, which can be viewed as the original idea of the PLTSs. There are also some research results on hesitant fuzzy preference relations25,26 and the application of the HFLTSs27.

Recently, Pang et al.12 introduced the probabilistic linguistic term sets (PLTSs), which are composed of EHFLTSs in which each linguistic term has a probability indicating its frequency in group decision making, the importance of the term, or the degree of belief on that linguistic term expressed by a decision maker. The PLTSs have been investigated from different viewpoints. Bai et al.2 introduced a new comparison method for the PLTSs. Gou and Xu7 proposed some novel operational laws for the linguistic terms, the HFLTSs and the PLTSs. Zhang et al.34 discussed the additive consistency of probabilistic linguistic preference relations based on graph theory. The analogous case of the PLTSs in the quantitative context was also investigated, and the probabilistic HFSs (PHFSs)30 were introduced, which incorporate probability information into the HFSs. Ref.30 also introduced a consensus building model based on the maximizing score deviation model and aggregation operators for PHFSs. The probabilistic dual HFSs8 were introduced to deal with risk evaluation problems.

Entropy was originally used to measure the uncertainty contained in a probability distribution. Later on, it was extended to measure the fuzziness contained in a fuzzy set4,10. Pal and Bezdek11 gave a comprehensive review of the entropy of FSs and the methods to combine the fuzziness and probability information of FSs. Under the hesitant fuzzy environment, different forms of entropy were proposed. Xu and Xia29 introduced the entropy and cross-entropy for HFSs, and applied the entropy to the TOPSIS method in multi-attribute decision making. Farhadinia5 further developed some distance-based entropy measures. Wei et al.21 introduced some new entropy measures which combine both the score function and the deviation function of HFSs into a unified form, and utilized the entropy to compute the criteria weights in multi-criteria decision making. Zhao et al.35 introduced the two-tuple entropy, which considers both the fuzziness and the nonspecificity of the HFSs.

All of the above entropy measures are suitable for HFSs, which have no probability information. New entropy measures should be developed since the probability information in the PLTSs cannot be omitted. Let us consider two PLTSs: L(P)(1) = {s1(0.5), s2(0.5)} and L(P)(2) = {s1(0.01), s2(0.99)} based on a linguistic term set S = {s0,…,s8}. From an intuitive viewpoint it can be seen that they contain different degrees of uncertainty although the linguistic terms used in them are identical. The first PLTS is totally hesitant since both probabilities of the linguistic terms are 0.5, while the second one contains less hesitation since s2 plays an important role because of its high probability, and as a result the PLTS behaves like the single linguistic term s2. In this situation, an entropy representing the uncertainties contained in the PLTSs should embody the differences in probability information. In PLTSs, the probability represents the randomness of the appearance of the linguistic terms. Each linguistic term in a PLTS carries a certain degree of fuzziness, and multiple linguistic terms in the PLTS represent a certain degree of hesitation if the PLTS contains two or more linguistic terms. To consider all of the uncertainties including the probability, fuzziness, and hesitation of a PLTS, and motivated by Refs.21,24,29,35, we propose some new entropy measures which can deal with all of the above uncertainties contained in the PLTSs. We then apply the entropy measures to determine the criteria weights and further develop a multi-criteria decision making model based on the fuzzy TOPSIS method.

The remainder of this paper is organized as follows: Section 2 reviews the PLTSs and entropy measures of HFSs, Section 3 introduces the entropy measures for PLTSs, Section 4 introduces a multi-criteria decision making model, Section 5 presents an illustrative example, and Section 6 concludes the whole paper.

2. Preliminaries

In this section, some basic concepts including the PLTSs and the entropy measures of HFSs are reviewed.

2.1. PLTSs

The PLTSs are defined by considering probability information in EHFLTSs.

Definition 1. 12

Let S = {s0,s1,…,sg} be a linguistic term set. A PLTS is defined as:

$$L(P)=\left\{ l_i(p_i) \,\middle|\, l_i\in S,\ p_i\ge 0,\ i=1,\dots,\#L(P),\ \sum_{i=1}^{\#L(P)} p_i\le 1 \right\}.$$

Note that if $\sum_{i=1}^{\#L(P)} p_i = 1$, then the PLTS is provided with complete information, and if $\sum_{i=1}^{\#L(P)} p_i < 1$, then only partial probability information is known. In order to make the probabilities sum to one, a normalization process is carried out by using the following formula:

$$L(P)=\left\{ l_i(p'_i) \,\middle|\, l_i\in S,\ i=1,\dots,\#L(P) \right\},$$
where $p'_i = p_i \big/ \sum_{i=1}^{\#L(P)} p_i$.

In such a way a normalized PLTS is obtained. The set of normalized PLTSs is denoted as $\bar{L}(P)$.

For a PLTS $L(P)\in\bar{L}(P)$, the expectation was computed as $E(L(P))=s_{\bar{\alpha}}$, where $\bar{\alpha}=\sum_{i=1}^{\#L(P)} p_i I(l_i)$, and I(·) denotes the subscript of the linguistic term. Some operations of PLTSs were also defined in Refs.7,12. We only use the complement of a PLTS, as follows:

$$(L(P))^c=\left\{ (s_{g-I(l_i)})(p_i) \,\middle|\, i=1,\dots,\#L(P) \right\}.$$

The distance between two PLTSs was defined based on the distance between each pair of elements of the PLTSs, which requires the numbers of elements of the two PLTSs to be equal. The method in Ref.12 appends the smallest linguistic term of the shorter PLTS with probability 0 several times until the shorter PLTS has the same number of elements as the longer one. To avoid the complexity of this computation, we propose a new distance between two PLTSs based on the expectation.

Definition 2.

Let L(P)(l), l = 1,2 be two PLTSs. The distance between them is defined as:

$$d(L(P)^{(1)},L(P)^{(2)})=\frac{\left| E(L(P)^{(1)})-E(L(P)^{(2)}) \right|}{g},$$
where the difference of two linguistic terms is computed through their subscripts.
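To make the above operations concrete, here is a minimal Python sketch, assuming a PLTS is represented as a list of (term index, probability) pairs over S = {s0,…,sg}; the function names are our own.

```python
def normalize(plts):
    """Rescale probabilities so that they sum to one: p'_i = p_i / sum(p_i)."""
    total = sum(p for _, p in plts)
    return [(i, p / total) for i, p in plts]

def expectation(plts):
    """Subscript of the expected linguistic term: alpha_bar = sum(p_i * I(l_i))."""
    return sum(p * i for i, p in plts)

def complement(plts, g):
    """(L(P))^c = {(s_{g - I(l_i)})(p_i)}."""
    return [(g - i, p) for i, p in plts]

def distance(plts1, plts2, g):
    """Expectation-based distance of Definition 2 (difference of subscripts)."""
    return abs(expectation(plts1) - expectation(plts2)) / g

# Example on S = {s0,...,s8}:
L1 = normalize([(1, 0.5), (2, 0.5)])    # {s1(0.5), s2(0.5)}
L2 = normalize([(1, 0.01), (2, 0.99)])  # {s1(0.01), s2(0.99)}
print(distance(L1, L2, g=8))            # |1.5 - 1.99| / 8 = 0.06125
```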

2.2. Entropy measures of HFSs

An important issue of the HFS is the measurement of the information contained in it. The commonly-used measure is entropy. Different forms of entropy measures have been proposed for HFSs. Here we mainly review the two-tuple entropy, which is composed of the fuzziness and nonspecificity of the HFSs.

Definition 3. 35

Let H be the set of HFSs, and EF, ENS : H → [0,1] be two functions. The pair (EF, ENS) is called a two-tuple entropy if it satisfies the following conditions:

(i) EF(α) = 0 if and only if α = {0} or α = {1};
(ii) EF(α) = 1 if and only if α = {0.5};
(iii) EF(α) ≤ EF(β), if $\alpha_{\sigma(i)}\le\beta_{\sigma(i)}\le 0.5$ or $\alpha_{\sigma(i)}\ge\beta_{\sigma(i)}\ge 0.5$, #α = #β, i = 1,…,#α;
(iv) EF(α) = EF(αc), where αc is the complement of α, which is expressed as αc = ∪αi∈α {1 − αi};
(v) ENS(α) = 0 if and only if there is only one element contained in α;
(vi) ENS(α) = 1 if and only if α = {0,1};
(vii) ENS(α) ≤ ENS(β) if $|\alpha_{\sigma(i)}-\alpha_{\sigma(j)}| \le |\beta_{\sigma(i)}-\beta_{\sigma(j)}|$ for #α = #β, i, j = 1,…,#α;
(viii) ENS(α) = ENS(αc).

The fuzziness measure EF and the nonspecificity measure ENS can be called the fuzzy entropy and the hesitant entropy, respectively.

3. Entropy measures of the PLTSs

In this section, the fuzzy entropy and hesitant entropy of the PLTSs are introduced, then the total entropy is proposed to combine the two entropy measures.

3.1. The fuzzy entropy of the PLTSs

For any linguistic term li ∈ S, it is easy to transform the term into a value in [0,1] by using αi = I(li)/g. Since the fuzzy entropy of HFSs can be applied to a single value αi ∈ [0,1], we propose the fuzzy entropy of the PLTSs by considering the fuzzy entropy of the linguistic terms and the probability information in the PLTSs.

Definition 4.

Let $L(P)=\{l_i(p_i)\,|\,i=1,\dots,\#L(P)\}\in\bar{L}(P)$ be a PLTS, and αi = I(li)/g, i = 1,…,#L(P). The fuzzy entropy of the PLTSs is defined as
$$\bar{E}_F(L(P))=\sum_{i=1}^{\#L(P)} p_i E_F(\alpha_i),$$
where EF is the fuzzy entropy of the HFSs defined in Definition 3.

Let us give some properties of the fuzzy entropy of the PLTSs.

Proposition 1.

The fuzzy entropy defined in Definition 4 has the following properties:
(i) ĒF(s0(1)) = ĒF(sg(1)) = 0, and further ĒF({s0(p), sg(1 − p)}) = 0;
(ii) ĒF(sg/2(1)) = 1;
(iii) ĒF(L(P)(1)) ≤ ĒF(L(P)(2)) if $l_i^{(1)}\le l_i^{(2)}\le s_{g/2}$ or $l_i^{(1)}\ge l_i^{(2)}\ge s_{g/2}$, and P(1) = P(2), #L(P)(1) = #L(P)(2), i = 1,…,#L(P)(1);
(iv) ĒF(L(P)) = ĒF(L(P)c).

Proof.

(i) Since α0 = I(s0)/g = 0, αg = I(sg)/g = 1, and EF(0) = EF(1) = 0, we have ĒF(s0(1)) = 1 · EF(0) = 0, ĒF(sg(1)) = 1 · EF(1) = 0, and ĒF({s0(p), sg(1 − p)}) = p · EF(0) + (1 − p)EF(1) = 0;
(ii) Since αg/2 = 0.5 and EF(0.5) = 1, we obtain ĒF(sg/2(1)) = 1 · 1 = 1;
(iii) If $l_i^{(1)}\le l_i^{(2)}\le s_{g/2}$, then $\alpha_i^{(1)}\le\alpha_i^{(2)}\le 0.5$. From the property of EF, we have $p_i^{(1)}E_F(\alpha_i^{(1)})\le p_i^{(2)}E_F(\alpha_i^{(2)})$, and thus ĒF(L(P)(1)) ≤ ĒF(L(P)(2)). The proof of the case $l_i^{(1)}\ge l_i^{(2)}\ge s_{g/2}$ is similar;
(iv) Since $L(P)^c=\{(s_{g-I(l_i)})(p_i)\,|\,i=1,\dots,\#L(P)\}$ and EF(α) = EF(αc), the conclusion ĒF(L(P)) = ĒF(L(P)c) follows naturally.

From Proposition 1, we can obtain the following property.

Proposition 2.

The proposed fuzzy entropy in Definition 4 coincides with the entropy measure defined in Ref.6 and Ref.23 if pi = 1/(#L(P)).

Proof.

The proof is divided into two parts.

(i) It is noted that the linguistic term set used here, S = {s0,…,sg}, is different from the one used in Ref.6, that is, S′ = {s−τ,…,sτ}. But they are essentially identical by setting
$$\mu(l_i)=\frac{2\tau}{g}\, l_i-\tau,$$
for li ∈ S, and then we have μ(li) ∈ S′. We need to prove that ĒF(L(P)) in Definition 4 satisfies the conditions of the entropy measure in Ref.6 if pi = 1/(#L(P)). Actually it only needs to be proved that 0 ≤ ĒF(L(P)) ≤ 1, since
$$\bar{E}_F(L(P))=\frac{1}{\#L(P)}\sum_{i=1}^{\#L(P)} E_F(\alpha_i),$$
and 0 ≤ EF(αi) ≤ 1. The remaining conditions are the same as in Proposition 1. Thus the result holds.

(ii) Similarly, we can transform the linguistic term set S = {s0,…,sg} in our proposal to the one used in Ref.23, that is, S″ = {s″0,…,s″g}, by setting ν(li) = li/2, for li ∈ S, and we have ν(li) ∈ S″. The proof that ĒF(L(P)) satisfies the remaining conditions is straightforward and is omitted here.

From the above properties, we know that the HFLTSs and the EHFLTSs can be viewed as special cases of the PLTSs. In an HFLTS HS or an EHFLTS EHS, no probability information is provided and thus the linguistic terms can be viewed as equally important. If we impose a probability pi on each term li ∈ HS or EHS, then pi = 1/(#HS) or 1/(#EHS). In this sense, the fuzzy entropy of the PLTSs is more general than the entropy of the HFLTSs and the EHFLTSs.

Intuitively, the fuzzy entropy of a PLTS measures the amount of fuzziness contained in it. For an element li(pi) ∈ L(P) = {li(pi)|li ∈ S, i = 1,2,…,#L(P)}, the fuzziness contained in li(pi) is composed of two parts: one is the fuzziness of li, the other is the probability pi. We explain this point by presenting a practical example. Let us consider the safety evaluation of a car based on a linguistic term set S = {s0 : extremely bad,…,sg/2 : medium,…,sg : extremely good}. If li = s0, pi = 1, then the safety of the car is extremely bad, and the decision may be “not to buy”. If li = sg, pi = 1, then the decision may be “buy” since the safety of the car is extremely good. If li = sg/2, pi = 1, then the decision may hesitate between “buy” and “not to buy” since the safety lies on the margin between good and bad. This case shows that the linguistic term li may bring some fuzziness. On the other hand, if pi = 0, then li will not appear, and if pi = 1, then li will appear and the fuzziness of li(pi) is solely determined by li. If 0 < pi < 1, then li may appear or not, and since li contains fuzziness itself, the fuzziness of li(pi) is determined by pi and li collectively. By summarizing the fuzziness of all elements in L(P), we obtain the fuzziness, i.e., the fuzzy entropy, of L(P).

For an element li0(pi0) ∈ L(P), if pi0 → 1, then pj → 0 for j ≠ i0 and ĒF(L(P)) → EF(αi0), which indicates that li0(pi0) plays an important role in determining the fuzzy entropy of L(P). Similarly, if pi0 → 0, then pi0 · EF(αi0) → 0, which means that the importance of li0(pi0) is almost negligible in L(P). These results are consistent with the intuition that the linguistic terms with low probability are less important than the terms with high probability in a PLTS.

The expression of ĒF depends on the form of EF. Ref.11 gave some formulas of entropy, based on which we propose the following fuzzy entropy measures of PLTSs:

(i) $\bar{E}_{F1}(L(P))=-\frac{1}{\ln 2}\sum_{i=1}^{\#L(P)} p_i[\alpha_i\ln\alpha_i+(1-\alpha_i)\ln(1-\alpha_i)]$;
(ii) $\bar{E}_{F2}(L(P))=2\left(\sum_{i=1}^{\#L(P)}(p_i\min\{\alpha_i,1-\alpha_i\})^q\right)^{1/q}$, $q\ge 1$;
(iii) $\bar{E}_{F3}(L(P))=\frac{1}{\sqrt{e}-1}\sum_{i=1}^{\#L(P)} p_i\left[\alpha_i e^{1-\alpha_i}+(1-\alpha_i)e^{\alpha_i}-1\right]$;
(iv) $\bar{E}_{F4}(L(P))=1-\sum_{i=1}^{\#L(P)} p_i|1-2\alpha_i|$;
(v) $\bar{E}_{F5}(L(P))=\frac{1}{(1-q)\ln 2}\sum_{i=1}^{\#L(P)} p_i\ln\left[\alpha_i^q+(1-\alpha_i)^q\right]$, $q>0$, $q\neq 1$;
(vi) $\bar{E}_{F6}(L(P))=4^q\sum_{i=1}^{\#L(P)} p_i\alpha_i^q(1-\alpha_i)^q$, $0<q<1$.

The properties of EF were investigated in depth; for more details we refer to Ref.11.
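As a sanity check on these formulas, the following sketch (continuing the (index, probability) representation of the earlier snippet; the function name and parameter defaults are ours) evaluates all six measures; applied to L(P)(2) of Example 1 below, with q = 1 in Ē_F2 and q = 1/2 in Ē_F5 and Ē_F6, it reproduces the middle column of Table 1.

```python
import math

def fuzzy_entropies(plts, g, q2=1.0, q5=0.5, q6=0.5):
    """The six fuzzy entropy measures E_F1..E_F6 of a PLTS."""
    alphas = [(i / g, p) for i, p in plts]
    e1 = -sum(p * (a * math.log(a) + (1 - a) * math.log(1 - a))
              for a, p in alphas if 0 < a < 1) / math.log(2)
    e2 = 2 * sum((p * min(a, 1 - a)) ** q2 for a, p in alphas) ** (1 / q2)
    e3 = sum(p * (a * math.exp(1 - a) + (1 - a) * math.exp(a) - 1)
             for a, p in alphas) / (math.sqrt(math.e) - 1)
    e4 = 1 - sum(p * abs(1 - 2 * a) for a, p in alphas)
    e5 = sum(p * math.log(a ** q5 + (1 - a) ** q5)
             for a, p in alphas) / ((1 - q5) * math.log(2))
    e6 = 4 ** q6 * sum(p * (a * (1 - a)) ** q6 for a, p in alphas)
    return [e1, e2, e3, e4, e5, e6]

L2 = [(2, 0.5), (4, 0.5)]            # L(P)(2) = {s2(0.5), s4(0.5)}, g = 8
es = fuzzy_entropies(L2, g=8)
print([round(e, 4) for e in es])     # [0.9056, 0.75, 0.8794, 0.75, 0.95, 0.933]
print(round(sum(es) / 6, 4))         # 0.8613, the arithmetic mean
```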

It is interesting that a new entropy can be constructed as a function of some known entropy measures6. Motivated by this idea, we can construct a new fuzzy entropy in the following form:

$$\bar{E}_F(L(P))=\Phi\left(\bar{E}_{F1}(L(P)),\dots,\bar{E}_{Fk}(L(P))\right),$$
where Φ is monotone non-decreasing, Φ(0,…,0) = 0, Φ(1,…,1) = 1, and $\bar{E}_{Fi}(L(P))$, i = 1,…,k are fuzzy entropy measures of L(P).

Based on this idea, a special fuzzy entropy can be obtained as a convex combination of the aforementioned six fuzzy entropy measures, that is,

$$\bar{E}_F(L(P))=\sum_{i=1}^{6}\lambda_i\bar{E}_{Fi}(L(P)),$$
where λi ∈ [0,1], i = 1,…,6, and $\sum_{i=1}^{6}\lambda_i=1$. Especially, if λi = 1/6, i = 1,…,6, then the convex combination reduces to the arithmetic mean of the six types of fuzzy entropy, which can minimize the absolute deviation of the six fuzzy entropy measures.

To illustrate the computational process of the fuzzy entropy, we provide an example as follows:

Example 1.

Let S = {s0,s1,…,s8} be a linguistic term set, and

L(P)(1) = {s3(0.4), s4(0.6)}, L(P)(2) = {s2(0.5), s4(0.5)}, L(P)(3) = {s2(0.25), s3(0.5), s4(0.25)}

be three normalized PLTSs. For simplicity, we set the parameters in the fuzzy entropy measures as: q = 1 in $\bar{E}_{F2}$, and q = 1/2 in $\bar{E}_{F5}$ and $\bar{E}_{F6}$. The results of the fuzzy entropy measures and their arithmetic mean are shown in Table 1.

        L(P)(1)   L(P)(2)   L(P)(3)
Ē_F1    0.9818    0.9056    0.9300
Ē_F2    0.9000    0.7500    0.7500
Ē_F3    0.9761    0.8794    0.9098
Ē_F4    0.9000    0.7500    0.7500
Ē_F5    0.9980    0.9500    0.9634
Ē_F6    0.9873    0.9330    0.9506
Ē_F     0.9600    0.8613    0.8757

Table 1. The computation results of the fuzzy entropy.

From the results, we can see that except for $\bar{E}_{F2}$ and $\bar{E}_{F4}$, the fuzzy entropy measures produce the same ranking:
$$\bar{E}_{Fl}(L(P)^{(1)})>\bar{E}_{Fl}(L(P)^{(3)})>\bar{E}_{Fl}(L(P)^{(2)}),$$
for l = 1, 3, 5, 6.

The arithmetic mean of all the entropy measures also produces the same ranking. With q = 1 in $\bar{E}_{F2}$, the entropy measures $\bar{E}_{F2}$ and $\bar{E}_{F4}$ produce the same ranking, that is, for l = 2, 4,
$$\bar{E}_{Fl}(L(P)^{(1)})>\bar{E}_{Fl}(L(P)^{(2)})=\bar{E}_{Fl}(L(P)^{(3)}).$$
Thus they cannot discriminate L(P)(2) and L(P)(3).

Let us investigate $\bar{E}_{F2}$ in detail. To clarify the influence of the parameter q in $\bar{E}_{F2}$, Figure 1 plots $\bar{E}_{F2}(L(P)^{(l)})$ as a function of q, for l = 1, 2, 3.

Fig. 1. The functions of $\bar{E}_{F2}(L(P)^{(l)})$, l = 1, 2, 3.

If q > 1 in $\bar{E}_{F2}$, then
$$\bar{E}_{F2}(L(P)^{(1)})>\bar{E}_{F2}(L(P)^{(2)})>\bar{E}_{F2}(L(P)^{(3)}).$$

The result is not the same as for most of the other entropy measures. This can be proved briefly. Actually, if a > b > 0, we have
$$(a^q+b^q)^{1/q}\to a,\quad q\to+\infty.$$
Therefore, if q → +∞, then
$$\bar{E}_{F2}(L(P)^{(l)})\to 2\max_i\left(p_i^{(l)}\min\{\alpha_i^{(l)},1-\alpha_i^{(l)}\}\right)\triangleq\beta^{(l)}.$$
It can be obtained that β(1) = 0.6, β(2) = 0.5, β(3) = 0.375, so β(1) > β(2) > β(3), and thus
$$\bar{E}_{F2}(L(P)^{(1)})>\bar{E}_{F2}(L(P)^{(2)})>\bar{E}_{F2}(L(P)^{(3)}).$$

All of the above results can be seen from Fig. 1.
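This limiting behavior is easy to verify numerically with the fuzzy_entropies sketch above; a large finite q stands in for q → +∞.

```python
# E_F2 with a large q approaches beta^(l): 0.6, 0.5, 0.375 for the three PLTSs.
pltss = [[(3, 0.4), (4, 0.6)],
         [(2, 0.5), (4, 0.5)],
         [(2, 0.25), (3, 0.5), (4, 0.25)]]
for l, plts in enumerate(pltss, start=1):
    e2 = fuzzy_entropies(plts, g=8, q2=50.0)[1]
    print(l, round(e2, 4))
```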

From the example, we can see that sometimes the computational results of the fuzzy entropy measures are not consistent. To avoid this drawback we recommend the use of the convex combination of such fuzzy entropy measures.

In the more general cases, Pal and Bezdek11 defined the multiplicative and additive classes of entropy measures. Here we review them briefly.

Let f : [0,1] → R+ be a function satisfying f′(x) > 0 and f″(x) < 0, and let $\hat{g}(x)=f(x)f(1-x)$, $\hat{h}(x)=f(x)+f(1-x)$, $g(x)=\hat{g}(x)-\min_{0\le x\le 1}\hat{g}(x)$, and $h(x)=\hat{h}(x)-\min_{0\le x\le 1}\hat{h}(x)$. Then the two entropy measures $H_1=K_1\sum_{i=1}^{n}g(\mu_i)$ and $H_2=K_2\sum_{i=1}^{n}h(\mu_i)$ satisfy the conditions of the fuzziness in Definition 3.

Based on these results, we can obtain a new class of fuzzy entropy of the PLTSs. Let
$$\hat{\phi}(x)=\lambda\hat{g}(x)+(1-\lambda)\hat{h}(x)=\lambda f(x)f(1-x)+(1-\lambda)(f(x)+f(1-x)),$$
where λ ∈ [0,1], and
$$\phi(x)=\frac{\hat{\phi}(x)-\min_{0\le x\le 1}\hat{\phi}(x)}{\max_{0\le x\le 1}\hat{\phi}(x)-\min_{0\le x\le 1}\hat{\phi}(x)},$$
then
$$\bar{E}_F(L(P))=\sum_{i=1}^{\#L(P)} p_i\phi(\alpha_i)$$
is a fuzzy entropy of the PLTSs. The proof of this property is immediate since $\hat{\phi}(x)$ is a convex combination of $\hat{g}(x)$ and $\hat{h}(x)$, and thus they share similar properties. By choosing different forms of the function f, we can obtain different fuzzy entropy measures of the PLTSs.

3.2. The hesitant entropy of the PLTSs

The hesitant entropy defined in Definition 3 cannot take into account the probability information in the PLTSs. Therefore, in the following we redefine the hesitant entropy for PLTSs so that it can deal with both the probability and the hesitation contained in the PLTSs.

Definition 5.

Let $L(P)=\{l_i(p_i)\,|\,i=1,\dots,\#L(P)\}\in\bar{L}(P)$ be a PLTS, and γij = |αi − αj|, i, j = 1,…,#L(P). The function $\bar{E}_H:\bar{L}(P)\to[0,1]$ is called the hesitant entropy of the PLTSs if it satisfies the following conditions:

(i) ĒH(L(P)) = 0 if and only if L(P) = {l1(1)}, that is, L(P) only contains one element;
(ii) If L(P) = {l1(p1), l2(p2)}, and p1 → 1, p2 → 0, then ĒH(L(P)) → 0;
(iii) ĒH(L(P)) = 1 if and only if L(P) = {s0(0.5), sg(0.5)};
(iv) ĒH(L(P)(1)) ≤ ĒH(L(P)(2)), if #L(P)(1) = #L(P)(2), P(1) = P(2), and $\gamma_{ij}^{(1)}\le\gamma_{ij}^{(2)}$;
(v) If L(P) = {l1(p1), l2(p2)}, and γ12 → 0, then ĒH(L(P)) → 0;
(vi) ĒH(L(P)) = ĒH(L(P)c).

By considering the above conditions, we can give the hesitant entropy of the PLTSs as:
$$\bar{E}_H(L(P))=\begin{cases}\sum_{i=1}^{\#L(P)}\sum_{j=i+1}^{\#L(P)} 4 p_i p_j f(\gamma_{ij}), & \#L(P)\ge 2;\\ 0, & \#L(P)=1,\end{cases}$$
where f : [0,1] → [0,1] is strictly monotone increasing with f(0) = 0 and f(1) = 1.

The hesitant entropy measures the deviation of the linguistic terms in the PLTS and also considers the probability information of such terms. For li(pi), lj(pj) ∈ L(P), i ≠ j, the bigger the deviation of li and lj, i.e., the value of γij, the bigger the hesitancy contained in L(P). On the other hand, the biggest hesitancy is achieved when pi = pj, since in this case li and lj have the same probability of appearing. Summing the hesitancy over all pairs in L(P), we obtain the overall hesitancy, i.e., the hesitant entropy of L(P).

In the following we prove that the ĒH(L(P)) expressed by the above formula meets the requirements of Definition 5.

Proposition 3.

The ĒH(L(P)) expressed by the above formula is a hesitant entropy.

Proof.

It only needs to be proved that ĒH(L(P)) satisfies the following conditions:
(i) If L(P) = {l1(1)}, then it is obvious that ĒH(L(P)) = 0. Conversely, suppose by contradiction that ĒH(L(P)) = 0 and there exist at least two different elements l1(p1), l2(p2) ∈ L(P). In this case, ĒH(L(P)) ≥ 4p1p2 f(γ12), so p1 = 0 or p2 = 0, or f(γ12) = 0. As a result, one element does not exist or the two elements are identical, which contradicts the assumption. Thus ĒH(L(P)) = 0 leads to the conclusion that L(P) = {l1(1)}, that is, L(P) only contains one element;
(ii) If L(P) = {l1(p1), l2(p2)}, and p1 → 1, p2 → 0, then ĒH(L(P)) = 4p1p2 f(γ12) → 0;
(iii) If L(P) = {s0(0.5), sg(0.5)}, then ĒH(L(P)) = 4 · 0.5 · 0.5 · 1 = 1. Conversely, if ĒH(L(P)) = 4p1p2 f(γ12) = 1, then since $4p_1p_2 f(\gamma_{12})\le 4p_1p_2\le (p_1+p_2)^2\le 1$, equality requires p1 = p2 = 0.5 and f(γ12) = 1, which leads to γ12 = 1. Therefore L(P) = {s0(0.5), sg(0.5)};
(iv) The conclusion follows naturally from the monotonicity of the function f and the assumption that P(1) = P(2);
(v) If L(P) = {l1(p1), l2(p2)}, and γ12 → 0, then ĒH(L(P)) = 4p1p2 f(γ12) → 0;
(vi) Note that $\alpha_i^c=(g-I(l_i))/g=1-\alpha_i$, so $|\alpha_i-\alpha_j|=|\alpha_i^c-\alpha_j^c|$, i.e., $\gamma_{ij}=\gamma_{ij}^c$, and thus $f(\gamma_{ij})=f(\gamma_{ij}^c)$. On the other hand, L(P) and L(P)c have the same probability information, so ĒH(L(P)) = ĒH(L(P)c).

Proposition 4.

The proposed hesitant entropy in Definition 5 coincides with the hesitant entropy defined in Ref.23 if pi = 1/(#L(P)).

Proof.

If pi = 1/(#L(P)), then
$$\bar{E}_H(L(P))=\begin{cases}\dfrac{4}{(\#L(P))^2}\sum_{i=1}^{\#L(P)}\sum_{j=i+1}^{\#L(P)} f(\gamma_{ij}), & \#L(P)\ge 2;\\ 0, & \#L(P)=1.\end{cases}$$
From Definition 5 we know that ĒH(L(P)) satisfies the conditions of the hesitant entropy in Ref.23.

The regular increasing monotone (RIM) functions31 used in the calculation of OWA weights satisfy the requirements on f in the hesitant entropy. Numerous such functions can actually be found in the literature. In the following we only give several simple examples:

(i) $f_1(x)=x^r$, $r>0$;
(ii) $f_2(x)=\sin\frac{\pi x}{2}$;
(iii) $f_3(x)=1-\cos\frac{\pi x}{2}$;
(iv) $f_4(x)=\frac{\ln(1+x)}{\ln 2}$;
(v) $f_5(x)=\frac{a^x-1}{a-1}$, $a>0$, $a\neq 1$;
(vi) $f_6(x)=\frac{2x}{x+1}$.

The hesitant entropy measures of PLTSs generated from fl(x) are denoted as $\bar{E}_{Hl}(L(P))$, l = 1,…,6.
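A runnable sketch of Ē_H with the six RIM functions (same PLTS representation as in the earlier snippets; the names hesitant_entropy and fs are ours), checked against the middle column of Table 2 in Example 2 below (r = 1 in f1, a = 2 in f5):

```python
import math

def hesitant_entropy(plts, g, f):
    """E_H(L(P)): sum over pairs of 4 * p_i * p_j * f(gamma_ij)."""
    if len(plts) < 2:
        return 0.0
    total = 0.0
    for i in range(len(plts)):
        for j in range(i + 1, len(plts)):
            gamma = abs(plts[i][0] - plts[j][0]) / g
            total += 4 * plts[i][1] * plts[j][1] * f(gamma)
    return total

fs = [lambda x: x,                                # f1 with r = 1
      lambda x: math.sin(math.pi * x / 2),        # f2
      lambda x: 1 - math.cos(math.pi * x / 2),    # f3
      lambda x: math.log(1 + x) / math.log(2),    # f4
      lambda x: 2 ** x - 1,                       # f5 with a = 2
      lambda x: 2 * x / (x + 1)]                  # f6

L2 = [(2, 0.5), (4, 0.5)]   # L(P)(2) of Example 1
print([round(hesitant_entropy(L2, 8, f), 4) for f in fs])
# -> [0.25, 0.3827, 0.0761, 0.3219, 0.1892, 0.4], the middle column of Table 2
```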

In a similar way to the fuzzy entropy, a new class of hesitant entropy can be constructed by using a convex combination of the above examples, that is,
$$\bar{E}_H(L(P))=\sum_{i=1}^{6}\lambda_i\bar{E}_{Hi}(L(P)),$$
where λi ∈ [0,1], i = 1,…,6, and $\sum_{i=1}^{6}\lambda_i=1$. Also, we can obtain the arithmetic mean of the hesitant entropy measures when λi = 1/6, i = 1,…,6.

To demonstrate the behavior of the hesitant entropy, we present a numerical example.

Example 2.

Let L(P)(l), l = 1,2,3 be the same as in Example 1. Set r = 1 in $\bar{E}_{H1}(L(P))$ and a = 2 in $\bar{E}_{H5}(L(P))$. We calculate the hesitant entropy by using the six forms and their arithmetic mean. The results are shown in Table 2.

        L(P)(1)   L(P)(2)   L(P)(3)
Ē_H1    0.1200    0.2500    0.1875
Ē_H2    0.1873    0.3827    0.2908
Ē_H3    0.0184    0.0761    0.0382
Ē_H4    0.1631    0.3219    0.2504
Ē_H5    0.0869    0.1892    0.1378
Ē_H6    0.2133    0.4000    0.3222
Ē_H     0.1305    0.2700    0.2045

Table 2. The computation results of the hesitant entropy.

From the results, we can see that all the hesitant entropy measures produce the same ranking:
$$\bar{E}_{Hl}(L(P)^{(2)})>\bar{E}_{Hl}(L(P)^{(3)})>\bar{E}_{Hl}(L(P)^{(1)}),$$
for l = 1,…,6, and thus their arithmetic mean also yields the same ranking. In this view, any one of the hesitant entropy measures is suitable for PLTSs. However, the relative differences in values are obviously distinct, and we therefore think that the convex combination may be more suitable.

Additionally, the hesitant entropy of L(P)(2) is greater than that of L(P)(1) since it has more hesitation both in the linguistic terms and in the probabilities. The hesitant entropy of L(P)(2) is greater than that of L(P)(3) since it is totally hesitant between s2 and s4, while L(P)(3) is not so hesitant: s3 plays a major role because of its high probability, and s2 and s4 influence the hesitation to a lesser extent. The reason that the hesitant entropy of L(P)(3) is greater than that of L(P)(1) is obvious since it contains more hesitation both in the linguistic terms and in the probabilities. These observations fit well with our intuition.

3.3. The total entropy of PLTSs

The fuzzy entropy and the hesitant entropy measure the fuzziness and the hesitation contained in PLTSs, respectively. They reflect different uncertain information of the PLTSs, but together they determine the total uncertainties contained in the PLTSs. In this section, we develop a total entropy of PLTSs which combines the fuzzy entropy and the hesitant entropy in a unified form. Compared with the two-tuple entropy35, such a form facilitates the computation of the entropy.

Definition 6.

Let $L(P)\in\bar{L}(P)$ be a PLTS. The function $\bar{E}_T:\bar{L}(P)\to[0,1]$ is called the total entropy if it satisfies the following conditions:

(i) ĒT(L(P)) = 0 if and only if L(P) = {s0(1)} or L(P) = {sg(1)};
(ii) ĒT(L(P)) = 1 if and only if L(P) = {sg/2(1)} or L(P) = {s0(0.5), sg(0.5)};
(iii) ĒT(L(P)(1)) ≤ ĒT(L(P)(2)), if ĒF(L(P)(1)) ≤ ĒF(L(P)(2)) and ĒH(L(P)(1)) ≤ ĒH(L(P)(2));
(iv) ĒT(L(P)) = ĒT(L(P)c).

Proposition 5.

The proposed total entropy in Definition 6 coincides with the total entropy defined in Ref.23 if pi = 1/(#L(P)).

Proof.

If pi = 1/(#L(P)), then the PLTS L(P) can be viewed as an EHFLTS in which the linguistic terms are retained and the probabilities are dropped. We prove that ĒT(L(P)) satisfies the conditions of the total entropy in Ref.23.

(i) It is obvious that ĒT(L(P)) = 0 if and only if L(P) = {s0(1)} or L(P) = {sg(1)};
(ii) If L(P) = {sg/2(1)}, then ĒT(L(P)) = 1;
(iii) Suppose that $L(P)^{(k)}=\{l_i^{(k)}(p_i^{(k)})\,|\,i=1,\dots,\#L(P)^{(k)}\}$, k = 1,2. If $|l_i^{(1)}-s_{g/2}|\ge|l_i^{(2)}-s_{g/2}|$, and $\gamma_{ij}^{(1)}\le\gamma_{st}^{(2)}$, for i < j, s < t, i, j = 1,…,#L(P)(1), s, t = 1,…,#L(P)(2), then ĒF(L(P)(1)) ≤ ĒF(L(P)(2)) and ĒH(L(P)(1)) ≤ ĒH(L(P)(2)), and thus ĒT(L(P)(1)) ≤ ĒT(L(P)(2));
(iv) It is obvious that ĒT(L(P)) = ĒT(L(P)c).

This completes the proof.

From the above property, we know that the total entropy introduced here is more general than the one in Ref.23 since the probabilities of the linguistic terms need not be equal. Additionally, the total entropy introduced here has more desirable properties than the one in Ref.23 regarding the probability information.

By considering the conditions in Definition 6 and the properties of the fuzzy entropy and the hesitant entropy, we can construct the total entropy as ĒT(L(P)) = ψ(ĒF(L(P)), ĒH(L(P))), where ψ : [0,1] × [0,1] → [0,1]. Note that the fuzzy entropy and the hesitant entropy can be viewed as two parallel concepts for PLTSs, so they are commutative in the definition of the total entropy. With this in mind, and considering the properties of the fuzzy entropy and the hesitant entropy, the conditions in Definition 6 reduce to the following properties of the function ψ:

(i) ψ(0,0) = 0;
(ii) ψ(1,0) = ψ(0,1) = 1;
(iii) ψ(x,y) = ψ(y,x);
(iv) ψ(x,y) is monotone increasing with respect to x and y.

Since the fuzzy entropy and the hesitant entropy of PLTSs are invariant under complement, the last requirement of the total entropy reduces to ψ(x,y) = ψ(x,y), which holds trivially. Additionally, if one of the fuzzy entropy and the hesitant entropy approaches 0, then the total entropy should approach the other entropy. That is to say, ψ(x,y) → x if y → 0, and ψ(x,y) → y if x → 0.

Let us consider the special case that ĒF(L(P)) → 1 and ĒH(L(P)) → 1. For the simple case that L(P) = {l1(p1), l2(p2)}, we have
$$\bar{E}_F(L(P))=p_1E_F(\alpha_1)+p_2E_F(\alpha_2)\to 1.$$
Since p1 + p2 = 1, we know that EF(α1), EF(α2) → 1, and thus l1, l2 → sg/2. On the other hand, since ĒH(L(P)) = 4p1p2 f(γ12) → 1, we have p1, p2 → 0.5 and f(γ12) → 1, thus l1 → s0 and l2 → sg, which contradicts l1, l2 → sg/2. Therefore, the value ψ(1,1) has no practical meaning and it can be left undefined. For convenience of notation, ψ(1,1) can also be set to any value in [0,1], for example the value 1. This assumption does not influence the rationality of the function.

It is interesting that the triangular co-norms3 satisfy all the conditions of the total entropy. For simplicity, we only give the following commonly-used triangular co-norms:

(i) $\psi_1(x,y)=\max(x,y)$;
(ii) $\psi_2(x,y)=x+y-xy$;
(iii) $\psi_3(x,y)=\min(x+y,1)$;
(iv) $\psi_4(x,y)=\begin{cases}1, & (x,y)\in(0,1]\times(0,1];\\ \max(x,y), & \text{otherwise}.\end{cases}$

By replacing x with ĒF(L(P)) and y with ĒH(L(P)), we can obtain the expression of the total entropy. The total entropy generated from the function ψl(x,y) is denoted as $\bar{E}_{Tl}(L(P))$, l = 1,2,3,4. Similarly, we can also use a convex combination of them as a new total entropy:

$$\bar{E}_T(L(P))=\sum_{i=1}^{4}\lambda_i\psi_i\left(\bar{E}_F(L(P)),\bar{E}_H(L(P))\right),$$
where λi ∈ [0,1], i = 1,…,4, and $\sum_{i=1}^{4}\lambda_i=1$. The arithmetic mean is obtained in the case that λi = 1/4, i = 1,…,4.
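The four co-norms and their arithmetic-mean combination are straightforward to code; a short sketch (the function names are ours), composing fuzzy and hesitant entropy values computed with the earlier snippets:

```python
# The four commonly-used t-conorms above and their convex combination.
def psi1(x, y): return max(x, y)
def psi2(x, y): return x + y - x * y                 # probabilistic sum
def psi3(x, y): return min(x + y, 1.0)               # bounded sum
def psi4(x, y): return 1.0 if x > 0 and y > 0 else max(x, y)   # drastic sum

def total_entropy(ef, eh, weights=(0.25, 0.25, 0.25, 0.25)):
    """Convex combination of psi_1..psi_4; equal weights give the arithmetic mean."""
    return sum(w * psi(ef, eh) for w, psi in zip(weights, (psi1, psi2, psi3, psi4)))

# e.g. with E_F = 0.8613 and E_H = 0.2700 for L(P)(2) (Tables 1 and 2):
print(round(total_entropy(0.8613, 0.2700), 4))       # ~0.94
```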

4. Multi-criteria decision making model based on PLTSs

In this paper, we consider the linguistic multi-criteria decision making problem that consists of a finite set of alternatives A = {a1,…,am} and a set of criteria C = {c1,…,cn} with the weighting vector W = (w1,…,wn) completely unknown. A group of decision makers provide their assessments of each alternative with respect to the criteria, and each element of the collective assessment matrix is composed of several linguistic terms, each of which has a probability generated by the frequency of that term in the group opinions. After normalization, an assessment matrix based on PLTSs is obtained as $\left(L^{(kl)}(P^{(kl)})\right)_{m\times n}$, where $L^{(kl)}(P^{(kl)})=\{l_i^{(kl)}(p_i^{(kl)})\,|\,i=1,\dots,\#L^{(kl)}(P^{(kl)})\}$ and $l_i^{(kl)}\in S$ denotes the ith assessment of the alternative ak under the criterion cl, with probability $p_i^{(kl)}$.

In the following, we introduce the resolution process based on the fuzzy TOPSIS method16.

Step 1. Compute the average total entropy under each criterion cl, l = 1,…,n as:

$$\bar{E}_T(c_l)=\frac{1}{m}\sum_{k=1}^{m}\bar{E}_T\left(L^{(kl)}(P^{(kl)})\right).$$

Step 2. Compute the weights of the criteria as
$$w_l=\frac{1-\bar{E}_T(c_l)}{\sum_{l=1}^{n}\left(1-\bar{E}_T(c_l)\right)},\quad l=1,\dots,n.$$

The idea of the above method lies in the fact that the less uncertainty contained in the assessments under a criterion, the more important the criterion is, and thus the bigger weight it should have.

Step 3. Calculate the fuzzy positive ideal solution $c_l^+=\{s_g(1)\}$ and the fuzzy negative ideal solution $c_l^-=\{s_0(1)\}$. Then the normalized distance between each alternative and the fuzzy positive or negative ideal solution can be obtained as (k = 1,…,m)
$$d^+(a_k)=\sum_{l=1}^{n} w_l\, d\left(L^{(kl)}(P^{(kl)}),\, c_l^+\right),$$
$$d^-(a_k)=\sum_{l=1}^{n} w_l\, d\left(L^{(kl)}(P^{(kl)}),\, c_l^-\right).$$

Step 4. Compute the closeness coefficient (CC) of each alternative as
$$CC_k=\frac{d^-(a_k)}{d^+(a_k)+d^-(a_k)}.$$

Step 5. Rank the alternatives according to their CCs and select the one with the biggest CC as the solution.

Remark 1.

By simple computation in Step 3, we have d+(ak) + d−(ak) = 1. Actually,
$$d^+(a_k)+d^-(a_k)=\sum_{l=1}^{n}w_l\left(\frac{g-\bar{\alpha}^{(kl)}}{g}+\frac{\bar{\alpha}^{(kl)}}{g}\right)=\sum_{l=1}^{n}w_l=1,$$
where $\bar{\alpha}^{(kl)}$ is the subscript of the expectation $E(L^{(kl)}(P^{(kl)}))$. Therefore, in Step 4 we have CCk = d−(ak), and Step 4 can be omitted.
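Pulling the pieces together, here is an end-to-end sketch of Steps 1–5, reusing the functions from the earlier snippets. Their default parameters differ slightly from those chosen in Section 5 (e.g. a = 2 rather than a = 3 in f5), so the numbers it prints will deviate slightly from Table 4; the assessment matrix is Table 3 with probabilities as printed.

```python
# Entropy-based TOPSIS of Section 4 on the data of Table 3 (g = 8, m = 3, n = 4).
matrix = [  # rows: a1, a2, a3; columns: c1..c4
    [[(3, 0.4), (4, 0.6)], [(2, 0.2), (4, 0.8)],
     [(3, 0.2), (4, 0.8)], [(3, 0.4), (5, 0.6)]],
    [[(3, 0.8), (5, 0.2)], [(2, 0.25), (3, 0.5), (4, 0.25)],
     [(1, 0.25), (2, 0.5), (3, 0.25)], [(3, 0.8), (4, 0.2)]],
    [[(3, 0.6), (4, 0.4)], [(3, 0.75), (4, 0.25)],
     [(3, 0.33), (4, 0.33), (5, 0.33)], [(4, 0.8), (6, 0.2)]],
]
g, m, n = 8, 3, 4

def mean_total_entropy(plts):
    ef = sum(fuzzy_entropies(plts, g)) / 6                    # mean fuzzy entropy
    eh = sum(hesitant_entropy(plts, g, f) for f in fs) / 6    # mean hesitant entropy
    return total_entropy(ef, eh)

# Step 1: average total entropy per criterion; Step 2: entropy-based weights.
et = [sum(mean_total_entropy(matrix[k][l]) for k in range(m)) / m for l in range(n)]
w = [(1 - e) / sum(1 - x for x in et) for e in et]

# Steps 3-5: by Remark 1, CC_k equals d^-(a_k), the weighted normalized expectation.
cc = [sum(w[l] * expectation(matrix[k][l]) / g for l in range(n)) for k in range(m)]
ranking = sorted(range(m), key=lambda k: -cc[k])
print([round(x, 4) for x in cc], ["a%d" % (k + 1) for k in ranking])
```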

5. An illustrative example

For convenience of comparison, in this section we adopt the illustrative example in Ref.12. The directors of a company want to invest in three projects A = {a1,a2,a3}, and each project is evaluated under four criteria C = {c1,c2,c3,c4} based on a linguistic term set {s0,s1,…,s8} by using the Balanced Scorecard method, with the criteria weights completely unknown. After collecting the assessments of the directors and conducting a normalization process, the assessment matrix based on PLTSs shown in Table 3 is obtained.

      c1                    c2                              c3                              c4
a1    {s3(0.4), s4(0.6)}    {s2(0.2), s4(0.8)}              {s3(0.2), s4(0.8)}              {s3(0.4), s5(0.6)}
a2    {s3(0.8), s5(0.2)}    {s2(0.25), s3(0.5), s4(0.25)}   {s1(0.25), s2(0.5), s3(0.25)}   {s3(0.8), s4(0.2)}
a3    {s3(0.6), s4(0.4)}    {s3(0.75), s4(0.25)}            {s3(0.33), s4(0.33), s5(0.33)}  {s4(0.8), s6(0.2)}

Table 3. The normalized assessment matrix12.

The problem is solved by the following steps in the previous section.

Step 1. Compute the average total entropy of each criterion. Six formulas of the fuzzy entropy, six formulas of the hesitant entropy, and four formulas of the total entropy are introduced in this paper, so there are 6 × 6 × 4 = 144 different combinations of the total entropy. Since some entropy measures contain parameters, there are actually infinitely many formulas of total entropy, and we cannot explore all of the cases. For simplicity, we adopt the arithmetic means of the fuzzy entropy, the hesitant entropy, and the total entropy, i.e., the equal-weight convex combinations given in Section 3.

Assume that the parameters in the entropy measures are given as q = 1 in $\bar{E}_{F2}$, q = 0.5 in $\bar{E}_{F5}$, q = 1 in $\bar{E}_{F6}$, r = 1 in $\bar{E}_{H1}$, and a = 3 in $\bar{E}_{H5}$. The results are shown in Table 4.

      Ē_F      Ē_H      Ē_T      W
c1    0.9294   0.1453   0.9673   0.1736
c2    0.9006   0.1600   0.9543   0.2425
c3    0.8355   0.1780   0.9242   0.4022
c4    0.9249   0.1757   0.9657   0.1817

Table 4. The computation results of the entropy measures and the criteria weights.

Step 2. Compute the weighting vector W of the criteria and the result is shown in Table 4.

Step 3. Calculate the distances between each alternative and the fuzzy negative ideal solution (by Remark 1, d+(ak) = 1 − d−(ak)) as (d−(a1), d−(a2), d−(a3)) = (0.4737, 0.3379, 0.4733).

Step 4. Rank the alternatives according to their CCs as a1 ≻ a3 ≻ a2, and the best choice is a1.

It can be seen that our method produces the same ranking as the extended TOPSIS method and the aggregation method in Ref.12. In the fuzzy TOPSIS method, the ranking of the alternatives is determined solely by the weights of the criteria once the assessments of each alternative under the criteria are fixed. Although the criteria weights of our method and of Ref.12 are different, they share a common characteristic: the ranking of the criteria weights is the same, i.e., w3 > w2 > w4 > w1. Therefore, the reason for the same ranking of the alternatives lies in the same ranking of the criteria weights. Ref.12 utilized an optimization model that maximizes the deviation of the weighted assessments of the alternatives under each criterion to compute the criteria weights. Our method seems to be more direct and flexible, computing the criteria weights by using the entropy measures of the assessments under each criterion. Such entropy measures deal with PLTSs in a comprehensive way by considering the probability, the fuzziness, and the hesitation contained in the PLTSs, and different forms of entropy can be selected by different decision makers.

6. Conclusions

In situations that include both hesitation and probability information, the PLTSs serve as a good tool to accommodate both types of information. The PLTSs contain probability, fuzziness, and hesitation information; the fuzziness and the hesitation, together with the probabilities, can be measured by the fuzzy entropy and the hesitant entropy respectively, and both entropy measures can be combined into the total entropy. This paper discusses different forms of such entropy measures and applies them in multi-criteria decision making. In the future, more forms of the entropy measures will be investigated, and new applications of the entropy will be studied.

Acknowledgments

The authors are very grateful to the Editor and the anonymous reviewers for their constructive comments and suggestions that have helped to improve the quality of this paper. This work is supported by the National Natural Science Foundation of China (71571123); the Key Scientific Research Funds of Henan Provincial Department of Education (15A630011, 16A630038, 17A120006); the Doctoral Research Start-up Funding Project of Zhengzhou University of Light Industry and Henan University of Economics and Law (BSJJ2013053, 800234); the Pre-Research Funds on National Key Projects of Henan University of Economics and Law (852014).

References

8. Z Hao, Z Xu, H Zhao, and Z Su, Probabilistic dual hesitant fuzzy set and its application in risk evaluation, Knowledge-Based Systems, 2017.
23. CP Wei, RM Rodríguez, and L Martínez, Uncertainty measures of extended hesitant fuzzy linguistic term sets, IEEE Transactions on Fuzzy Systems, 2017.
30. Z Xu and W Zhou, Consensus building with a group of decision makers under the hesitant probabilistic fuzzy environment, Fuzzy Optimization & Decision Making, 2016.