Article

Seasonal Entropy, Diversity and Inequality Measures of Submitted and Accepted Papers Distributions in Peer-Reviewed Journals

Marcel Ausloos 1,2,3,*, Olgica Nedić 4 and Aleksandar Dekanski 5
1 School of Business, College of Social Sciences, Arts, and Humanities, University of Leicester, Leicester LE2 1RQ, UK
2 Department of Statistics and Econometrics, Bucharest University of Economic Studies, Calea Dorobantilor 15-17, 010552 Sector 1 Bucharest, Romania
3 Group of Researchers for Applications of Physics in Economy and Sociology (GRAPES), Rue de la belle jardinière 483, Angleur, B-4031 Liège, Belgium
4 Institute for the Application of Nuclear Energy (INEP), University of Belgrade, 11080 Belgrade, Serbia
5 Institute of Chemistry, Technology and Metallurgy, Department of Electrochemistry, University of Belgrade, 11000 Belgrade, Serbia
* Author to whom correspondence should be addressed.
Entropy 2019, 21(6), 564; https://doi.org/10.3390/e21060564
Submission received: 29 March 2019 / Revised: 26 May 2019 / Accepted: 27 May 2019 / Published: 4 June 2019

Abstract: This paper presents a novel method for finding features in the analysis of variable distributions stemming from time series. We apply the methodology to the case of submitted and accepted papers in peer-reviewed journals. We provide a comparative study of editorial decisions for papers submitted to two peer-reviewed journals: the Journal of the Serbian Chemical Society (JSCS) and this MDPI Entropy journal. We cover three recent years for which the fate of submitted papers—about 600 papers to JSCS and 2500 to Entropy—is completely determined. Instead of comparing the number distributions of these papers as a function of time with respect to a uniform distribution, we analyze the relevant probabilities, from which we derive the information entropy. It is argued that such probabilities are indeed more relevant for authors than the actual numbers of submissions. We tie this entropy analysis to the so-called diversity of the variable distributions. Furthermore, we emphasize the correspondence of the entropy and the diversity with inequality measures, like the Herfindahl–Hirschman index and the Theil index, the latter itself being in the class of entropy measures; the Gini coefficient, which also measures diversity in ranking, is calculated for further discussion. In this sample, the seasonal aspects of the peer review process are outlined. It is found that the use of such indices, non-linear transformations of the data distributions, allows us to distinguish features and evolutions of the peer review process as a function of time, as well as to compare the non-uniformity of distributions. Furthermore, t- and z-statistical tests are applied in order to measure the significance (p-level) of the findings, that is, whether papers are more likely to be accepted if they are submitted during a few specific months or during a particular “season”; the predictability strength depends on the journal.

1. Introduction

Authors who submit (by their own assumption) high-quality papers to scholarly journals are interested in knowing whether there are factors which may increase the probability that their papers will be accepted. One such factor may be related to the month or day of submission, as recently discussed [1]. Indeed, authors might wonder about editors’ and reviewers’ overload at some times of the year. Moreover, the number of submitted papers is relevant for editors and for publishers’ manuscript-handling systems, to the point that artificial intelligence can be useful for helping journal editors [2,3]. More generally, informetrics and bibliometrics are also concerned with manuscript submission timing, especially in light of the enormous increase in the number of electronic journals.
From the author’s point of view, rejection is often frustrating, be it due to an “editor desk rejection” or following a review process. A high desk rejection rate has sometimes been explained as an entrance-barrier editor-load effect [4]. Thus, it is of interest to observe whether there is a high probability of submission during specific months or seasons. In fact, non-uniform submission has already been studied. However, the acceptance distribution over a year, that is, a “monthly bias”, is rarely studied, because of publisher secrecy. Search engines do not provide any information at all on the timing of rejected papers.
Interestingly, Boja et al. [1] recently examined a large database of journals with high impact factors and reported that a day-of-the-week correlation effect occurs between “when a paper is submitted to a peer-reviewed journal (and) whether that paper is accepted”. However, rejected papers could not be studied because of a lack of data; therefore, one may wonder whether, besides a “day of the week” effect, there is some “seasonal” effect. One may indeed imagine that researchers in academic surroundings do not have a constant occupation rate, due to teaching classes, holidays, congresses, and even budgetary conditions; researchers have only specific times during the academic year for producing research papers.
From the “seasonal effect” point of view, Shalvi et al. [5] found a discrepancy in the pattern of “submission-per-month” and “acceptance-per-month” for Psychological Science (PS) but not for Personality and Social Psychology Bulletin (PSPB). Summer months inspired authors to submit more papers to PS, but the subsequent acceptance was not related to a seasonal bias effect (based on a $\chi^2_{(11)}$ test for percentages). On the other hand, a very low rate of acceptance was recorded for manuscripts sent in November or December. The number of submissions to PSPB, on the contrary, was the greatest during winter months, followed by a reduced “production” in April; however, the rate of acceptance was the highest for papers submitted in the period from August to October. Moreover, a significant “acceptance success dip” was noted for submissions made in winter months. One of the main reasons for such differences between journals was conjectured to lie in different rejection policies: some journals employ desk rejection, whereas others do not.
Schreiber [4] analysed the acceptance rate of a journal—Europhysics Letters (EPL)—over a period of 12 years and found that the rate of manuscript submission exceeded the rate of acceptance. The data revealed (Table 2 in [4]) a maximum number of submissions in July, defined as a 10% increase compared to the annual mean, together with a minimum in February, even taking into account the “shorter length” of this month. He concluded that significant fluctuations exist between months. The acceptance rate ranged from 45% to 55%; in the most recent years, the highest acceptance rate was seen in July and the lowest in January.
Recently, Ausloos et al. [6] studied submission and subsequent acceptance data for two journals, a specialized (chemistry) scientific journal and a multidisciplinary journal, i.e., the Journal of the Serbian Chemical Society (JSCS) (http://shd.org.rs/JSCS/) and Entropy (https://www.mdpi.com/journal/entropy), each over a 3-year time interval. The authors found that fluctuations, expectedly, occur: the number of submissions to JSCS is the greatest in July and September and the smallest in May and December. The highest rate of paper submission to Entropy was noted in October and December and the lowest in August. Concerning acceptance for JSCS, the proportion of accepted/submitted manuscripts is the greatest in January and October. Concerning acceptance for Entropy, the number of papers steadily increases from January to a peak in May, followed by a marked dip during summer time, before reaching a peak in October of the order of the May peak.
Concerning the number of submitted manuscripts, it was observed that the acceptance rate in JSCS was the highest if papers were submitted in January and February; it was significantly lower if the submission occurred in December. In the case of Entropy, the highest rejection rate was for papers submitted in December and March, thus with a January–February peak; the lowest acceptance rate was for manuscripts submitted in June or December, the highest rate being for those sent in the spring months, February to May. One recognizes a journal-dependent seasonal shift of the features. Notice that we adopt the word “seasonal” loosely: even though changes in seasons occur on the 21st of various months, we approximate the season transition as occurring on the 1st day of the following month.
Here, we propose another line of approach in order to study the submission, acceptance, and rejection (number and rate) diversity based on probabilities, with emphasis on conditional probabilities, thereafter measuring the entropy and other characteristics of the distributions. Indeed, entropy is a measure of disorder, and one of several ways to measure diversity. Researchers have their own preferences [7,8] in measuring diversity. Below, we adapt the classical measure of diversity as used in ecology, but other cases of interest pertaining to information science [9,10] can be mentioned.
Let us recall that the general equation of diversity is often written in the form [11,12]

$$ {}^{q}D = \Big[ \sum_{i=1}^{N} p_i^{\,q} \Big]^{1/(1-q)} $$

in which $p_i = z_i / \sum_i z_i$, with $z_i$ the measured variable. For $q = 1$, ${}^{q}D$ reduces to the exponential of the Shannon entropy [13,14]

$$ {}^{1}D = \exp\Big[ -\sum_{i=1}^{N} p_i \ln(p_i) \Big] , $$
the only case to which we stick here.
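As an illustration, both expressions can be evaluated directly from a vector of counts; the following minimal Python sketch is ours (the monthly counts are hypothetical, not the journal data analyzed here):

```python
import numpy as np

def hill_diversity(z, q):
    """Hill number qD of order q for a vector of non-negative counts z."""
    p = np.asarray(z, dtype=float)
    p = p[p > 0] / p.sum()                      # p_i = z_i / sum_i z_i
    if np.isclose(q, 1.0):
        # q -> 1 limit: exponential of the Shannon entropy, i.e., 1D
        return np.exp(-np.sum(p * np.log(p)))
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

# Hypothetical monthly submission counts (12 months), for illustration only.
z = [26, 15, 19, 31, 26, 22, 31, 22, 21, 38, 26, 40]
print(hill_diversity(z, 1))   # 1D, the diversity index used below
print(hill_diversity(z, 2))   # 2D, the inverse of the HHI introduced below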
Several inequality measures are commonly used in the literature. In the class of entropy-related measures, one finds the exponential entropy [15], which measures the extent of a distribution, and the Theil index [16], which emerges as the most popular one [17,18], besides the Herfindahl–Hirschman index [19], which measures “concentrations”. Finally, upon ranking the measured variable according to size, the Gini coefficient [20] is a classical indicator of non-uniform distributions.
The Theil index [16] is defined by

$$ Th = \frac{1}{N} \sum_{i=1}^{N} \frac{z_i}{\bar{z}} \, \ln\frac{z_i}{\bar{z}} , \qquad \bar{z} = \frac{1}{N} \sum_{i=1}^{N} z_i . $$

It seems obvious that the Theil index can be expressed in terms of the entropy

$$ H = - \sum_{i=1}^{N} \frac{z_i}{\sum_i z_i} \, \ln\frac{z_i}{\sum_i z_i} , $$

indicating the deviation from the maximum disorder entropy, $\ln(N)$:

$$ H = \ln(N) - Th \qquad \mathrm{or} \qquad Th = \ln(N) - H . $$
The exponential entropy [15] is

$$ E = \exp(-H) = \prod_{i=1}^{N} p_i^{\,p_i} . $$

The Herfindahl–Hirschman index (HHI) [19] is an indicator of the “concentration” of variables, here of the “amount of competition” between the months. The higher the value of the HHI, the smaller the number of months with a large value of (submitted, or accepted, or accepted if submitted) papers in a given month. Formally, adapting the HHI notion to the present case,

$$ HHI = \sum_{i=1}^{N} \Big( \frac{z_i}{\sum_i z_i} \Big)^{2} . $$

Notice that $HHI = \sum_{i=1}^{N} p_i^2$.
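As a minimal numerical cross-check of the relations above ($Th = \ln(N) - H$, $E = \exp(-H)$, $HHI = \sum_i p_i^2$), all indices can be computed from one vector of counts; the counts below are again hypothetical:

```python
import numpy as np

def disorder_indices(z):
    """Shannon entropy H, Theil index Th, exponential entropy E, and HHI."""
    p = np.asarray(z, dtype=float)
    p = p / p.sum()                  # p_i = z_i / sum_i z_i
    H = -np.sum(p * np.log(p))       # Shannon entropy
    Th = np.log(p.size) - H          # Theil index, Th = ln(N) - H
    E = np.exp(-H)                   # exponential entropy, E = prod_i p_i^{p_i}
    HHI = np.sum(p ** 2)             # Herfindahl-Hirschman index
    return H, Th, E, HHI

z = [26, 15, 19, 31, 26, 22, 31, 22, 21, 38, 26, 40]   # hypothetical counts
H, Th, E, HHI = disorder_indices(z)
print(f"H = {H:.4f} (max ln 12 = {np.log(12):.4f})")
print(f"Th = {Th:.5f}, E = {E:.5f}, HHI = {HHI:.5f}")
```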
The Gini coefficient $Gi$ [20] has been widely used as a measure of income [21] or wealth inequality [22,23]; nowadays, it is widely used in many other fields. In brief, one first defines the Lorenz curve $L(r)$ as the fraction contributed by the bottom $r$ of the (now ranked) variable population to the total value $\sum_r z_r$ of the measured variable $z_r$, i.e., through $p_r = z_r / \sum_r z_r$. The Gini coefficient is then twice the area between this Lorenz curve and the diagonal in the $[r, L(r)]$ plane; such a diagonal represents perfect equality, whence $Gi = 0$ corresponds to perfect equality of the $z_r$ variables.
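A short sketch of this Lorenz-curve construction (trapezoidal integration of $L(r)$; the counts are once more hypothetical):

```python
import numpy as np

def gini(z):
    """Gini coefficient: twice the area between the Lorenz curve and the diagonal."""
    z = np.sort(np.asarray(z, dtype=float))        # rank the variable by size
    L = np.insert(np.cumsum(z), 0, 0.0) / z.sum()  # Lorenz curve L(r), with L(0) = 0
    r = np.linspace(0.0, 1.0, L.size)              # bottom fraction of the population
    return 2.0 * (0.5 - np.trapz(L, r))            # Gi = 0 for perfect equality

z = [26, 15, 19, 31, 26, 22, 31, 22, 21, 38, 26, 40]  # hypothetical counts
print(f"Gi = {gini(z):.4f}")
```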
Having set up the framework and presented the definitions of the indices to be calculated, we indicate the quantities of interest and turn to the data and data analysis in Section 2 and Section 3, respectively. Their discussion and comments on the present study, together with a remark on its limitations, are found in the concluding Section 4.

2. Definitions

In order to develop the method measuring the disorder of the time series, let us recall the necessary data. The raw data can be found in Reference [6]. For completeness, the time series of papers submitted and of papers accepted if submitted during a given month to JSCS and to Entropy are recalled in Figure A1, for the years in which the full data are available, that is, for which the final decisions have been made on all submitted papers.
Let us introduce some notation:
  • the number of monthly submissions in a given month ($m = 1, \ldots, 12$) in year ($y$) is called $N_s(m,y)$;
  • the percentage of this set is the probability of submission in a given month for a specific year, $q_s(m,y) = N_s(m,y) / \sum_m N_s(m,y)$;
  • similarly, one can define $N_a(m,y)$ as the number of accepted papers among those submitted in year ($y$) in a specific month ($m$);
  • for the related percentage, one has $q_a(m,y) = N_a(m,y) / \sum_m N_a(m,y)$;
  • more importantly for authors, the (conditional) probability of acceptance of a paper submitted in a given month, which may be estimated before submission, is
    $p_{(a|s)}(m,y) = N_a(m,y) / N_s(m,y)$.
Thereafter, one can deduce the relevant “monthly information entropies”
  • $S_s(m,y) = -\,q_s(m,y)\,\ln(q_s(m,y))$
  • $S_a(m,y) = -\,q_a(m,y)\,\ln(q_a(m,y))$
  • $S_{(a|s)}(m,y) = -\,p_{(a|s)}(m,y)\,\ln(p_{(a|s)}(m,y))$
and the overall information entropies:
  • $S_s(y) = \sum_m S_s(m,y)$
  • $S_a(y) = \sum_m S_a(m,y)$
  • $S_{(a|s)}(y) = \sum_m S_{(a|s)}(m,y)$
in order to pinpoint whether the yearly distributions are disordered.
Moreover, we can discuss the data not only by comparing different years, but also through the cumulated data per month over the examined time interval, as if all years were “equivalent”:
  • $C_s(m) = \sum_y N_s(m,y)$, from which one deduces
  • $q_s(m) = C_s(m) / \sum_m C_s(m)$;
  • similarly for the accepted papers, $C_a(m) = \sum_y N_a(m,y)$, and
  • $q_a(m) = C_a(m) / \sum_m C_a(m)$;
  • leading to the ratio between cumulated monthly data,
    $q_{(a|s)}(m) = C_a(m) / C_s(m)$,
  • to the corresponding “monthly cumulated entropy”, $S_{(a|s)}(m) = -\,q_{(a|s)}(m)\,\ln(q_{(a|s)}(m))$,
  • and finally to $S_{(a|s)} = \sum_m S_{(a|s)}(m)$,
which will be called the “conditional entropy”.
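This bookkeeping is straightforward to implement; here is a sketch, assuming the monthly counts $N_s(m,y)$ and $N_a(m,y)$ of one year are available as 12-element arrays (the numbers are hypothetical, for illustration only):

```python
import numpy as np

def yearly_entropies(N_s, N_a):
    """Overall information entropies S_s(y), S_a(y), S_(a|s)(y) from monthly counts."""
    N_s = np.asarray(N_s, dtype=float)
    N_a = np.asarray(N_a, dtype=float)
    q_s = N_s / N_s.sum()                 # q_s(m, y)
    q_a = N_a / N_a.sum()                 # q_a(m, y)
    p_as = N_a / N_s                      # conditional probability p_(a|s)(m, y)
    S = lambda p: float(-np.sum(p * np.log(p)))
    return S(q_s), S(q_a), S(p_as)

# Hypothetical monthly counts for one year (January..December).
N_s = [26, 15, 19, 31, 26, 22, 31, 22, 21, 38, 26, 40]
N_a = [18,  9,  9, 11, 12,  9, 15,  8, 13, 23, 14, 19]
S_s, S_a, S_as = yearly_entropies(N_s, N_a)
print(f"S_s(y) = {S_s:.4f}, S_a(y) = {S_a:.4f}, S_(a|s)(y) = {S_as:.4f}")
```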
The relevant values are given in Table 1, Table 2, Table 3 and Table 4, both for JSCS and for Entropy. The diversity and inequality index values are given in Table 5. Most of the results stem from the use of free online statistical software [24].

3. Data Analysis

3.1. Data

First, notice that the 3-year-long time series is not in itself part of the main aim of the paper; this is because we intend to compare data with an equivalent number of degrees of freedom, that is, 11, for all studied cases. Nevertheless, for completeness, and in order not to distract readers from our framework, we provide the relevant figures in Appendix A, together with a note on the corresponding discrete Fourier transform. A short note, in the Appendix, recalls the meaning of the (p-) significance level.

3.2. Analysis

The relevant values of the various indices, given in Table 1, Table 2, Table 3 and Table 4, both for JSCS and for Entropy, serve the following analysis. We consider three aspects: (i) a posteriori feature findings, (ii) non-linear entropy indices, and (iii) forecasting aspects.

3.2.1. A Posteriori Feature Findings

Browsing through Table 1, it can be noticed that the probability of submission is lower during the February–May months for JSCS, but rather high in the fall and winter months. For Entropy, the highest probability of submission also occurs in October–December and is preceded by a low rate of submissions, the lowest being in February and in August, one might say at vacation times. Let us recall that the extremum entropy (for “perfect disorder”) is here $\ln(12) \simeq 2.4849$.
Apparently, this submission evolution pattern is reflected—see Table 2—in the acceptance rate, except for JSCS, which has a low acceptance rate for papers submitted in winter 2014. For Entropy, the weaker acceptance rate occurs for papers submitted during the August–September months, i.e., the end of summer time.
Statistical tests, for example $\chi^2$, can be applied to ensure the validity of these findings for percentages, taking into account the number of observations. In all cases, such a test demonstrates that the distributions are far from uniform, suggesting looking further for the major deviations. See a discussion of other tests in Section 3.2.3.
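For instance, such a $\chi^2$ test against the uniform distribution can be run as follows (a sketch with hypothetical counts; SciPy's chisquare assumes equal expected frequencies by default, giving 11 degrees of freedom here):

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical monthly submission counts for one year.
N_s = np.array([26, 15, 19, 31, 26, 22, 31, 22, 21, 38, 26, 40])

# H0: submissions are uniformly distributed over the 12 months.
stat, p = chisquare(N_s)
print(f"chi2(11) = {stat:.3f}, p = {p:.4f}")
```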
However, the $q_a(m,y)$ values only measure the probability of monthly acceptances without considering the number of submissions in a given month. It is in this respect more appropriate to look at the conditional probabilities $q_{(a|s)}(m)$, as in Table 3. For JSCS, the highest values of $q_{(a|s)}(m)$ are found for the winter months: $q_{(a|s)}(m)$ has a notable maximum in January, and the lowest values occur in spring–summer time, from March till August. This pattern is shifted for Entropy: the highest conditional probabilities occur during spring time, except in 2016.
The corresponding values of the monthly entropy, for the given years and for the cumulated distributions, are found in Table 4. All values of the (conditional) entropy are remarkably close to 4.1, both for JSCS and Entropy, suggesting some sort of universality. One can notice that the entropy steadily increases as a function of time for both JSCS and Entropy, the growth rate being about twice as large for the latter journal. This is slightly surprising, since one should expect an averaging effect in the case of Entropy because of the multidisciplinarity of the topics involved. Comparing such values indicates that the distributions are indeed far from uniform. (The slight difference between the last lines of Table 3 and Table 4, displaying the “conditional entropy”, is merely due to rounding errors.)

3.2.2. Non-Linear Entropy Indices

The diversity and inequality measures are given in Table 5. The diversity index ${}^{1}D$ is remarkably similar for both journals (∼11) for the submitted-papers and accepted-papers distributions. The similarity also holds for the HHI ($\simeq 0.087$), although it is a little lower for the Entropy journal ($\simeq 0.085$). The diversity index for the conditional probability distributions is, however, rather different: both increase as a function of time, indicating an increase in concentration in favor of the relevant months. This rate of increase is much higher for Entropy than for JSCS.
The inequality between months is rather low, as seen from the Gini coefficient: there is a weak inequality between months. However, there is a factor of ∼2 in favor of JSCS, which we interpret as being due to the greater specificity of JSCS, implying a smaller involved community and specially favored topics. This numerical observation reinforces what can be deduced from the Theil index, inducing the same conclusion.

3.2.3. Forecasting Aspects

Considering the rather small sizes of both samples (not our fault!), it is of interest to discuss the significance of the findings, in view of suggesting some “strategy” after the “diagnosis”. The notions of “false positives” and “false negatives”, as in medical testing, can be applied in our framework.
In brief, a “false positive” is an error in which a test result improperly indicates the presence (high probability) of an outcome when in reality it is not present; a contrario, a “false negative” is an error in which a test result improperly indicates the absence of a condition when in reality it is present. This corresponds to rejecting (or accepting) a null hypothesis, for example, in econometrics. Thus, two statistical tests have been used for this discussion: (i) the Student t-test and (ii) the z-test. Recall that the former is used when one does not know the variance (or standard deviation) of the sample and test distributions, while the latter is used when one does. Such characteristics are given in Table 1, Table 2, Table 3 and Table 4 for each relevant quantity.
For completeness, the confidence interval $[\mu - 2\sigma, \mu + 2\sigma]$ has also been given. It is easily seen that there is no outlier. This observation would lead us, like other authors, to claim that there is no anomaly in the monthly numbers and subsequent percentages, in contradistinction with the $\chi^2$ values and tests. We should point out here that the Student t-test leads to a p-value < 0.0001, a quite significant result. Concentrating our attention on the (monthly and annual) conditional probabilities $N_a/N_s$, the z-test gives the significance reported in Table 3. The values (the so-called $\alpha$, or type I error) in hypothesis testing indicate that the correct conclusion is to reject the null hypothesis and to consider the existence of “false positives”. This is essentially due to the sample size. It is remarkable that the order of magnitude differs for JSCS and for Entropy.
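To illustrate the two tests, here is a sketch on a hypothetical vector of monthly conditional probabilities; the benchmark mean k and the “known” σ are placeholder values, not those underlying the tables:

```python
import numpy as np
from scipy.stats import ttest_1samp, norm

# Hypothetical monthly conditional probabilities p_(a|s)(m, y) for one year.
p_as = np.array([0.60, 0.53, 0.36, 0.41, 0.45, 0.43,
                 0.44, 0.39, 0.51, 0.58, 0.43, 0.40])

k = 0.5        # hypothesized mean (placeholder value)
sigma = 0.08   # standard deviation assumed known for the z-test (placeholder)

# (i) one-sample Student t-test: variance unknown, estimated from the sample.
t_stat, p_t = ttest_1samp(p_as, popmean=k)

# (ii) z-test: variance assumed known.
z_stat = (p_as.mean() - k) / (sigma / np.sqrt(p_as.size))
p_z = 2.0 * norm.sf(abs(z_stat))    # two-sided p-level

print(f"t = {t_stat:.3f} (p = {p_t:.4f}); z = {z_stat:.3f} (p-level = {p_z:.4f})")
```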

4. Conclusions

The data on the number of submitted papers are relevant for editors and, more so nowadays, for publishers, due to the automatic handling of papers. The relative number of accepted papers is less significant in that respect, but the conditional probability of having a paper accepted if it is submitted in a given month is very relevant for authors. Authors expect a fast and (hopefully) positive response from journals, and are probably interested in discovering the best timing for their submission in order to avoid possible editor overload and a negative effect at a particular moment. For these authors, the possible seasonal bias issue is expected to be relevant, as they would like to know whether a specific month of submission will increase the chance that their paper will be accepted. Thus, the probability of acceptance, the so-called “acceptance rate”, is the relevant variable to be studied. Instead of $\chi^2$ tests or observing the “confidence interval” on monthly distributions, we have proposed a new line of approach: considering the diversity and inequality in the distributions of papers submitted, accepted, or accepted if submitted in a given month, through information indices like the Shannon entropy [25], the diversity index, the Gini coefficient, and the Herfindahl–Hirschman index.
From these case studies, a seasonal bias seems stronger in the specialized (JSCS) journal. The features are emphasized because we use a non-linear transformation of the data, through information concepts whose usefulness has been demonstrated in many other fields [26]. In the present cases, seasonal bias effects are observed. The overall significance and the universality features might have to be re-examined if more data were available. Indeed, the p-values (the so-called $\alpha$, or type I error) in hypothesis testing indicate that the correct conclusion is to consider the existence of “false positives”.
Our outlined findings suggest intrinsic behavioral hypotheses for future research. Complementary aspects must be used as ingredients in order to understand whether some seasonal bias occurs [27,28]. One has to take into account the scientific work environment, besides the journal’s favored topics.

Author Contributions

All authors (M.A., O.N. and A.D.) contributed equally to all aspects of this paper: conceptualization; methodology; formal analysis; investigation; resources; data curation; writing—original draft preparation; writing—revised version and editing; visualization.

Funding

This research received no external funding.

Acknowledgments

M.A. greatly thanks the MDPI Entropy editorial staff for gathering and cleaning up the raw data, and in particular Yuejiao Hu, Managing Editor. Thanks also go to the reviewers and the Entropy editor.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Series Data

The time series of papers submitted and papers accepted if submitted during a given month to JSCS and to Entropy are given in Figure A1. The distributions are markedly non-uniform. Nevertheless, even with such a short series, one can observe that some periods are more important than others. One can also observe that Entropy, a rather new journal, has been attracting more submissions since 2015 and has an increased rejection rate. Some “parallelism” in the numbers of submitted and accepted-if-submitted papers in a given month seems apparent for JSCS.
Figure A1. Number of papers submitted and number of papers accepted if submitted, during a given month, to JSCS and to Entropy, in the examined 36 months of the 3-year time intervals, [2012–2014] and [2014–2016], respectively.
The two largest amplitudes of frequency f (in Month⁻¹), or periods, resulting from a Fourier analysis of the 3-year time series for papers submitted ($N_s$) or accepted if submitted ($N_a$) during a given month to JSCS and Entropy are given in Table A1. The one-year period is, in 3 cases, one of the two most important ones; the trimester period is the most important for papers submitted to JSCS, and the next largest for $N_a$ to JSCS, indicating the more relevant timing for this journal, which is more oriented toward academic authors than Entropy.
Table A1. The two largest amplitudes of frequency f (in Month⁻¹), or periods, resulting from a Fourier analysis of the 3-year time series for papers submitted ($N_s$) or accepted if submitted ($N_a$) during a given month to JSCS and Entropy, as displayed in Figure A1.

                         JSCS                                      Entropy
          N_s      f (period)      N_a      f (period)      N_s      f (period)      N_a      f (period)
1       125.42    0.3333 (3)      66.83    0.0833 (12)    720.23    0.0278 (36)    169.36    0.0556 (18)
2        94.94    0.3889 (2.57)   51.11    0.3333 (3)     378.38    0.0833 (12)    164.15    0.0833 (12)
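Such dominant periods can be extracted from a discrete Fourier transform of the 36-month series; a sketch on a hypothetical series with an annual cycle (the raw journal data are in Reference [6]):

```python
import numpy as np

def top_periods(series, n_top=2):
    """Largest non-zero-frequency Fourier amplitudes, with f (month^-1) and period."""
    x = np.asarray(series, dtype=float)
    amps = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0)     # frequencies in month^-1
    order = np.argsort(amps[1:])[::-1] + 1     # skip the zero-frequency (mean) term
    return [(amps[i], freqs[i], 1.0 / freqs[i]) for i in order[:n_top]]

# Hypothetical 36-month submission series: annual cycle plus noise.
rng = np.random.default_rng(0)
months = np.arange(36)
series = 25 + 5 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, months.size)
for amp, f, period in top_periods(series):
    print(f"amplitude {amp:.2f} at f = {f:.4f} month^-1 (period {period:.1f} months)")
```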
Computational Notes
This procedure tests the difference between an observed mean and a hypothesized value. A significance value (p-value) and the 95% Confidence Interval (CI) of the observed mean are reported. The p-value is the probability of obtaining the observed sample mean if the null hypothesis holds true.
The p-value is calculated using the one-sample t-test, with t calculated as

$$ t = \frac{\mu - k}{\sigma / \sqrt{N}} , $$

where k is the hypothesized mean and σ the standard deviation; in the present context, the hypothesized mean corresponds to that of the uniform distribution. Recall that the p-value is the area of the t-distribution with N − 1 degrees of freedom falling outside ±t.

References

  1. Boja, C.E.; Herţeliu, C.; Dârdală, M.; Ileanu, B.V. Day of the week submission effect for accepted papers in Physica A, PLOS ONE, Nature and Cell. Scientometrics 2018, 117, 887–918.
  2. Mrowinski, M.J.; Fronczak, A.; Fronczak, P.; Nedić, O.; Ausloos, M. Review times in peer review: Quantitative analysis and modelling of editorial workflows. Scientometrics 2016, 107, 271–286.
  3. Mrowinski, M.J.; Fronczak, P.; Fronczak, A.; Ausloos, M.; Nedić, O. Artificial intelligence in peer review: How can evolutionary computation support journal editors? PLoS ONE 2017, 12, e0184711.
  4. Schreiber, M. Seasonal bias in editorial decisions for a physics journal: You should write when you like, but submit in July. Learn. Publ. 2012, 25, 145–151.
  5. Shalvi, S.; Baas, M.; Handgraaf, M.J.J.; De Dreu, C.K.W. Write when hot—Submit when not: Seasonal bias in peer review or acceptance? Learn. Publ. 2010, 23, 117–123.
  6. Ausloos, M.; Nedić, O.; Dekanski, A. Correlations between submission and acceptance of papers in peer review journals. Scientometrics 2019, 119, 279–302.
  7. Marhuenda, Y.; Morales, D.; Pardo, M.C. A comparison of uniformity tests. Statistics 2005, 39, 315–327.
  8. Alizadeh Noughabi, H.A. Entropy-based tests of uniformity: A Monte Carlo power comparison. Commun. Stat. Simul. Comput. 2017, 46, 1266–1279.
  9. Rousseau, R. Concentration and diversity of availability and use in information systems: A positive reinforcement model. J. Am. Soc. Inf. Sci. 1992, 43, 391–395.
  10. Leydesdorff, L.; Rafols, I. Indicators of the interdisciplinarity of journals: Diversity, centrality, and citations. J. Informetr. 2011, 5, 87–100.
  11. Hill, M.O. Diversity and evenness: A unifying notation and its consequences. Ecology 1973, 54, 427–432.
  12. Jost, L. Entropy and diversity. Oikos 2006, 113, 363–375.
  13. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
  14. Shannon, C. Prediction and entropy of printed English. Bell Syst. Tech. J. 1951, 30, 50–64.
  15. Campbell, L.L. Exponential entropy as a measure of extent of a distribution. Probab. Theory Relat. Fields 1966, 5, 217–225.
  16. Theil, H. Economics and Information Theory; Rand McNally and Company: Chicago, IL, USA, 1967.
  17. Beirlant, J.; Dudewicz, E.J.; Györfi, L.; Van der Meulen, E.C. Nonparametric entropy estimation: An overview. Int. J. Math. Stat. Sci. 1997, 6, 17–39.
  18. Oancea, B.; Pirjol, D. Extremal properties of the Theil and Gini measures of inequality. Qual. Quant. 2019, 53, 859–869.
  19. Hirschman, A.O. The paternity of an index. Am. Econ. Rev. 1964, 54, 761–762.
  20. Gini, C. Indice di Concentrazione e di Dipendenza. Biblioteca dell’Economista, Serie V; Utet Torino: Turin, Italy, 1910; Volume XX; English translation in Riv. Politica Econ. 1997, 87, 769–789.
  21. Atkinson, A.B.; Bourguignon, F. (Eds.) Handbook of Income Distribution; Elsevier: Amsterdam, The Netherlands, 2014; Volume 2.
  22. Cerqueti, R.; Ausloos, M. Statistical assessment of regional wealth inequalities: The Italian case. Qual. Quant. 2015, 49, 2307–2323.
  23. Cerqueti, R.; Ausloos, M. Socio-economical analysis of Italy: The case of hagiotoponym cities. Soc. Sci. J. 2015, 52, 561–564.
  24. Wessa, P. Free Statistics Software, Office for Research Development and Education, Version 1.1.23-r. 2014. Available online: http://www.wessa.net/ (accessed on 4 June 2019).
  25. Crooks, G.E. On Measures of Entropy and Information. Tech. Note 2017, 9, v7.
  26. Clippe, P.; Ausloos, M. Benford’s law and Theil transform of financial data. Phys. A 2012, 391, 6556–6567.
  27. Nedić, O.; Drvenica, I.; Ausloos, M.; Dekanski, A. Efficiency in managing peer-review of scientific manuscripts—editors’ perspective. J. Serb. Chem. Soc. 2018, 83, 1391–1405.
  28. Drvenica, I.; Bravo, G.; Vejmelka, L.; Dekanski, A.; Nedić, O. Peer Review of Reviewers: The Author’s Perspective. Publications 2019, 7, 1.
Table 1. Number of papers N_s(y) and monthly percentage q_s(m,y) of papers submitted in a given year (y) and month (m), respectively, to JSCS in 2012, 2013, and 2014, and to Entropy in 2014, 2015, and 2016; q_s(m) is obtained after summing the events of each year for a given month, i.e., from C_s(m); last lines: χ² and entropy, mean, standard deviation, confidence interval, and t-test with significance level; recall that ln(12) ≈ 2.4849 and χ²₁₁(0.95) = 4.5748.

                          JSCS                                          Entropy
y =            2012      2013      2014    [2012–2014]      2014      2015      2016    [2014–2016]
N_s(y)          317       322       274        913           604       961      1008       2573
              q_s(m,y)  q_s(m,y)  q_s(m,y)    q_s(m)       q_s(m,y)  q_s(m,y)  q_s(m,y)    q_s(m)
January       0.08202   0.10870   0.08029    0.09091       0.09106   0.07596   0.08532    0.08317
February      0.04732   0.05280   0.09489    0.06353       0.07285   0.07492   0.07639    0.07501
March         0.05994   0.09317   0.10219    0.08434       0.07119   0.09157   0.07937    0.08201
April         0.09779   0.08385   0.10584    0.09529       0.08775   0.08325   0.08730    0.08589
May           0.08202   0.05590   0.05839    0.06572       0.07616   0.09990   0.08333    0.08784
June          0.06940   0.07453   0.06934    0.07119       0.06954   0.07700   0.09325    0.08162
July          0.09779   0.09627   0.09854    0.09748       0.07947   0.09261   0.07937    0.08434
August        0.06940   0.09317   0.06569    0.07667       0.05960   0.07596   0.06349    0.06724
September     0.06625   0.09938   0.09854    0.08762       0.07450   0.07700   0.08036    0.07773
October       0.11987   0.09938   0.05474    0.09310       0.11258   0.07492   0.09325    0.09094
November      0.08202   0.07764   0.10949    0.08872       0.07781   0.08949   0.09028    0.08706
December      0.12618   0.06522   0.06204    0.08543       0.12748   0.08741   0.08829    0.09716
χ²            23.278    14.075    14.964     15.811        29.497     9.377     9.333     20.236
entropy       2.4487    2.4620    2.4569     2.4760        2.4621    2.4801    2.4801     2.4809
Mean          0.08333   0.08333   0.08333    0.08333       0.08333   0.08333   0.08333    0.08333
Std Dev       0.02359   0.01820   0.02034    0.01145       0.01923   0.00860   0.00837    0.00772
μ − 2σ        0.03616   0.04694   0.04265    0.06043       0.04486   0.06614   0.06658    0.06790
μ + 2σ        0.13051   0.11973   0.12401    0.10624       0.12180   0.10053   0.10008    0.09877
t stat        654.12    854.49    705.30     2287.08       1107.62   3124.03   3287.43    5694.50
signif. (p<)  0.0001    0.0001    0.0001     0.0001        0.0001    0.0001    0.0001     0.0001
Table 2. Number of papers N_a(y) and monthly percentage q_a(m,y) of papers accepted when submitted in a given year (y) and month (m), respectively, to JSCS in 2012, 2013, and 2014, and to Entropy in 2014, 2015, and 2016; q_a(m) is obtained after summing the events of each year for a given month, i.e., from C_a(m); last lines: χ² and entropy, mean, standard deviation, confidence interval, and t-test with significance level; recall ln(12) ≈ 2.4849 and χ²₁₁(0.95) = 4.5748.

                          JSCS                                          Entropy
y =            2012      2013      2014    [2012–2014]      2014      2015      2016    [2014–2016]
N_a(y)          160       146       116        422           336       467       447       1250
              q_a(m,y)  q_a(m,y)  q_a(m,y)    q_a(m)       q_a(m,y)  q_a(m,y)  q_a(m,y)    q_a(m)
January       0.11250   0.12329   0.12069    0.11848       0.09524   0.08565   0.06935    0.08240
February      0.05625   0.06849   0.10345    0.07346       0.07143   0.08994   0.07830    0.08080
March         0.05625   0.05479   0.09483    0.06635       0.08929   0.09850   0.08054    0.08960
April         0.06875   0.05479   0.14655    0.08531       0.09226   0.08565   0.09843    0.09200
May           0.07500   0.06164   0.05172    0.06398       0.09226   0.11991   0.08054    0.09840
June          0.05625   0.06849   0.07759    0.06635       0.04762   0.07281   0.09396    0.07360
July          0.09375   0.07534   0.11207    0.09242       0.09226   0.07923   0.07159    0.08000
August        0.05000   0.07534   0.06897    0.06398       0.05952   0.05782   0.06711    0.06160
September     0.08125   0.11644   0.09483    0.09716       0.05357   0.08565   0.06935    0.07120
October       0.14375   0.13699   0.05172    0.11611       0.13095   0.07709   0.09172    0.09680
November      0.08750   0.10959   0.04310    0.08294       0.07738   0.07709   0.11409    0.09040
December      0.11875   0.05479   0.03448    0.07346       0.09821   0.07066   0.08501    0.08320
χ²            18.200    17.068    18.276     20.806        23.4286   14.8243   11.7651    19.5802
entropy       2.4305    2.4291    2.4042     2.4612        2.4496    2.4695    2.4722     2.4769
Mean          0.08333   0.08333   0.08333    0.08333       0.08333   0.08333   0.08333    0.08333
Std Dev       0.02935   0.02976   0.03455    0.01933       0.02298   0.01551   0.01412    0.01089
μ − 2σ        0.02462   0.02381   0.01424    0.04468       0.03737   0.05232   0.05509    0.06155
μ + 2σ        0.14204   0.14285   0.15243    0.12199       0.12930   0.11435   0.11157    0.10512
t stat        373.51    351.88    270.17     921.04        691.19    1207.53   1297.69    2813.71
signf. (p<)   0.0001    0.0001    0.0001     0.0001        0.0001    0.0001    0.0001     0.0001
Table 3. Conditional probability p_(a|s)(m,y) = N_a(m,y)/N_s(m,y) of having a paper accepted if submitted in a given month (m) to JSCS or to Entropy in a given year (y), and the corresponding cumulated conditional probability q_(a|s)(m) = C_a(m)/C_s(m) = Σ_y N_a(m,y) / Σ_y N_s(m,y); the sum of such probabilities is given; we also report the here so-called “conditional entropy” (c.entr.), either S_(a|s)(y) or S_(a|s). The distribution total (sum), mean, standard deviation, confidence interval, and t- and z-tests with p-significance level are also reported.

Month                     JSCS                                          Entropy
             p_(a|s)(m,y)                 q_(a|s)(m)     p_(a|s)(m,y)                 q_(a|s)(m)
               2012      2013      2014   [2012–2014]      2014      2015      2016   [2014–2016]
January       0.6923    0.5143    0.6364    0.6024        0.5818    0.5479    0.3605    0.4813
February      0.6000    0.5882    0.4615    0.5345        0.5455    0.5833    0.4545    0.5233
March         0.4737    0.2667    0.3929    0.3636        0.6977    0.5227    0.4500    0.5308
April         0.3548    0.2963    0.5862    0.4138        0.5849    0.5000    0.5000    0.5204
May           0.4615    0.5000    0.3750    0.4500        0.6739    0.5833    0.4286    0.5442
June          0.4091    0.4167    0.4737    0.4308        0.3810    0.4595    0.4468    0.4381
July          0.4839    0.3548    0.4815    0.4382        0.6458    0.4157    0.4000    0.4608
August        0.3636    0.3667    0.4444    0.3857        0.5556    0.3699    0.4687    0.4451
September     0.6190    0.5312    0.4074    0.5125        0.4000    0.5405    0.3827    0.4450
October       0.6053    0.6250    0.4000    0.5765        0.6471    0.5000    0.4362    0.5171
November      0.5385    0.6400    0.1667    0.4321        0.5532    0.4186    0.5604    0.5045
December      0.4750    0.3810    0.2353    0.3974        0.4286    0.3929    0.4270    0.4160
c.entr.       4.0120    4.0970    4.1301    4.2136        3.7919    4.1450    4.2943    4.1883
sum           6.0767    5.4809    5.0610    5.5375        6.6951    5.8343    5.3154    5.8266
Mean (μ)      0.5064    0.4567    0.4217    0.4615        0.5579    0.4862    0.4429    0.4856
Std Dev       0.1063    0.1271    0.1297    0.0770        0.1058    0.0737    0.0528    0.0432
μ − 2σ        0.2939    0.2026    0.1624    0.3075        0.3463    0.3387    0.3373    0.3992
μ + 2σ        0.7189    0.7109    0.6811    0.6154        0.7695    0.6337    0.5486    0.5719
t-test        52.786    46.897    43.870    130.33        67.933    135.995   203.05    380.07
z-test        0.8034    0.758     0.673     1.268         1.198     1.347     1.291     2.190
p-level       0.4221    0.4484    0.5012    0.2047        0.2309    0.1780    0.1968    0.0285
Table 4. Monthly information entropy and (last lines) overall information entropy, for specific years, S_(a|s)(m,y), and for the cumulated data over the relevant time interval, S_(a|s)(m), for either investigated journal; the last lines give the so-called “conditional entropy”, c.entr., either S_(a|s)(y) or S_(a|s), together with each distribution’s mean, standard deviation, confidence interval, and t-test with significance level.

Month                     JSCS                                          Entropy
             S_(a|s)(m,y)                 S_(a|s)(m)     S_(a|s)(m,y)                 S_(a|s)(m)
               2012      2013      2014   [2012–2014]      2014      2015      2016   [2014–2016]
January       0.25458   0.34199   0.28763   0.30531       0.31511   0.32963   0.36780   0.35196
February      0.30650   0.31213   0.35686   0.33483       0.33062   0.31441   0.35839   0.33888
March         0.35394   0.35247   0.36705   0.36785       0.25116   0.33909   0.35933   0.33619
April         0.36765   0.36041   0.31308   0.36513       0.31369   0.34657   0.34657   0.33992
May           0.35686   0.34657   0.36781   0.35933       0.26596   0.31441   0.36313   0.33109
June          0.36565   0.36478   0.35394   0.36279       0.36765   0.35732   0.35996   0.36157
July          0.35126   0.36765   0.35191   0.36155       0.28237   0.36489   0.36652   0.35702
August        0.36785   0.36788   0.36041   0.36745       0.32655   0.36787   0.35517   0.36029
September     0.29688   0.33603   0.36583   0.34258       0.36652   0.33253   0.36758   0.36031
October       0.30390   0.29375   0.36652   0.31754       0.28168   0.34657   0.36190   0.34104
November      0.33333   0.28562   0.29863   0.36257       0.32752   0.36453   0.32451   0.34518
December      0.35361   0.36765   0.34045   0.36672       0.36313   0.36705   0.36337   0.36486
c.entr.       4.0120    4.0969    4.1301    4.2137        3.7919    4.1449    4.2942    4.1883
Mean          0.33433   0.34141   0.34418   0.35114       0.3160    0.34541   0.35785   0.34903
Std Dev       0.03597   0.02924   0.02842   0.02131       0.03922   0.01963   0.01205   0.01162
μ − 2σ        0.26240   0.28294   0.28734   0.30852       0.23755   0.30615   0.33376   0.32578
μ + 2σ        0.40627   0.39989   0.40101   0.39375       0.39444   0.38467   0.38195   0.37227
t stat        216.505   251.492   229.588   577.295       295.560   665.578   1060.13   1828.53
signf. (p<)   0.0001    0.0001    0.0001    0.0001        0.0001    0.0001    0.0001    0.0001
Table 5. Diversity index, exponential entropy (e.entr.), Theil index, Herfindahl–Hirschman index, and Gini coefficient, for specific years and for the cumulated data over the relevant time interval, for the submitted, accepted, and accepted-if-submitted papers, respectively, to both investigated journals.

                          JSCS                                          Entropy
index          2012      2013      2014   [2012–2014]      2014      2015      2016   [2014–2016]
submitted papers
1D            11.574    11.729    11.669    11.893        11.730    11.942    11.943    11.952
e.entr.       0.08640   0.08526   0.08570   0.08408       0.08526   0.08373   0.08373   0.08367
Th            0.03619   0.02287   0.02797   0.00893       0.02280   0.00480   0.00480   0.00399
HHI           0.08945   0.08698   0.08788   0.08478       0.08740   0.08415   0.08410   0.08399
Gi            0.15063   0.11749   0.13139   0.07329       0.11369   0.05402   0.05192   0.04861
accepted papers
1D            11.364    11.349    11.069    11.719        11.584    11.817    11.848    11.904
e.entr.       0.08799   0.088114  0.09034   0.08533       0.08633   0.08463   0.08440   0.08401
Th            0.05446   0.05578   0.08073   0.02371       0.03528   0.01539   0.01275   0.00803
HHI           0.09281   0.09308   0.09646   0.08746       0.08914   0.08598   0.08553   0.08464
Gi            0.18646   0.18949   0.22557   0.12164       0.14335   0.09404   0.08930   0.07027
accepted papers if submitted in a given month
1D            55.257    60.158    62.186    67.602        44.341    63.116    73.278    65.912
e.entr.       0.08504   0.08634   0.08737   0.08438       0.08478   0.08423   0.08387   0.08364
Th            0.02022   0.03614   0.04727   0.01244       0.01716   0.01070   0.00641   0.00365
HHI           0.08670   0.08924   0.09056   0.08546       0.08608   0.08509   0.08442   0.08394
Gi            0.11355   0.15211   0.15965   0.08820       0.10083   0.08264   0.06189   0.04808
