Figure 1: Plot of the interior CDHS (CDHS_i, left) and the surface CDHS (CDHS_s, right) for DRS.

Contexts in source publication

Context 1
... formulation serves this purpose, where the elasticities may be adjusted via fmincon or other fitting algorithms, in conjunction with the intrinsic property of the CD-HPF that ensures a global maximum under concavity. Our simulations, which include animations and graphs, support this trend (see Figures 1 and 2 in Section 3). As the physical parameters change in value, so do the function values and their maxima for all the exoplanets in the catalog; this may rearrange the CDHS pattern as the parameters change, while maintaining consistency with the database. ...
Context 2
... optimal surface CDHS is obtained at γ = 0.8 and δ = 0.1. Using these results, 3-D graphs are generated and shown in Figure 1. The X and Y axes represent the elasticities and the Z axis represents the CDHS of the exoplanets. ...
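As a minimal sketch (not the authors' Appendix E code), such a 3-D elasticity surface can be reproduced in Python by evaluating the CD-HPF form k·Ve^γ·Ts^δ over a grid of elasticities; the planet values Ve, Ts and the choice k = 1 below are assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

Ve, Ts, k = 1.1, 0.9, 1.0              # assumed EU-normalised planet values

gamma = np.linspace(0.01, 0.99, 99)    # elasticity axes (X and Y)
delta = np.linspace(0.01, 0.99, 99)
G, D = np.meshgrid(gamma, delta)

Z = k * Ve**G * Ts**D                  # surface CDHS on the Z axis
Z[G + D >= 1] = np.nan                 # restrict to the DRS region (gamma + delta < 1)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(G, D, Z, cmap="viridis")
i = np.nanargmax(Z)                    # mark the maximum, as in Figure 1
ax.scatter(G.flat[i], D.flat[i], Z.flat[i], c="k")
ax.set_xlabel("gamma"); ax.set_ylabel("delta"); ax.set_zlabel("CDHS")
plt.show()
```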
Context 3
... 4, 5 and 6 show results for CRS, where the sum of the elasticities = 1 (the theoretical proof is given in Appendix C). The approximation algorithm fmincon initiates the search for the optima from a random initial guess, and then applies step increments or decrements based on the gradient of the function. Figures 1 and 2 show all the elasticities over which fmincon searches for the global maximum of the CDHS, indicated by a black circle. Those values are read off from the code (given in Appendix E) and printed as 0.8 and 0.1, or whichever values apply. ...
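The paper's search uses MATLAB's fmincon; a hedged Python analogue of the same gradient-based search, using scipy's SLSQP solver from a random initial guess inside the DRS region (γ + δ < 1), might look like the sketch below. The planet values are again illustrative assumptions, not catalog entries.

```python
import numpy as np
from scipy.optimize import minimize

Ve, Ts = 1.1, 0.9                      # assumed EU-normalised planet values

def neg_cdhs(x):
    g, d = x
    return -(Ve**g * Ts**d)            # negate so that minimisation maximises CDHS

x0 = np.random.uniform(0.05, 0.45, size=2)          # random initial guess
res = minimize(neg_cdhs, x0, method="SLSQP",        # gradient-based, like fmincon
               bounds=[(0.01, 0.99)] * 2,
               constraints=[{"type": "ineq",        # DRS: gamma + delta < 1
                             "fun": lambda x: 0.99 - x.sum()}])
print("optimal elasticities:", res.x, "max CDHS:", -res.fun)
```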
Context 4
... surface plot, essentially), corresponding to the 664 exoplanets under consideration. Each frame is a visual representation of the outcome of the CD-HPF and the CDHS applied to one exoplanet. The X and Y axes of the 3-D plots represent the elasticity constants and the Z axis represents the CDHS. Simply stated, each frame, shown as a snapshot of the animation in Figs. 1 and 2, is endowed with a maximum CDHS, and the cumulative effect of all such frames is elegantly captured in the animation (a sketch of this frame-by-frame construction follows below). The training-testing process is integral to machine learning, where the machine is trained to recognize patterns by assimilating large amounts of data and, upon applying the learned patterns, identifies new data with a reasonable ...
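One way such an animation could be assembled with matplotlib is one surface frame per exoplanet, each annotated with its maximum CDHS. The three (Ve, Ts) pairs below are hypothetical stand-ins for the 664 catalog entries, so this is a sketch of the idea rather than the paper's pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

planets = [(1.1, 0.9), (1.4, 1.2), (0.8, 1.05)]    # hypothetical (Ve, Ts) pairs
g = np.linspace(0.01, 0.99, 60)
G, D = np.meshgrid(g, g)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

def draw(frame):
    ax.clear()
    Ve, Ts = planets[frame]
    Z = Ve**G * Ts**D                  # CDHS surface for this exoplanet
    Z[G + D >= 1] = np.nan             # DRS region only
    ax.plot_surface(G, D, Z, cmap="viridis")
    ax.set_title(f"planet {frame}: max CDHS = {np.nanmax(Z):.3f}")
    ax.set_xlabel("gamma"); ax.set_ylabel("delta"); ax.set_zlabel("CDHS")

anim = FuncAnimation(fig, draw, frames=len(planets), interval=800)
plt.show()
```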
Context 5
... us consider the CDPF to gain more insight into computing the elasticity that maximizes the CDHS (Saha et al., 2016). The following heuristic offers an easy and quick way to compute the elasticity in real time. ...
Context 6
... agree that equation 8 is not a linear combination of Y1 and Y2 from equations 5 and 6, respectively. The formulas for Y1 and Y2 are used in the simulation visualizations (Figures 1 and 2). The difference between the function values of equations 7 and 8 is insignificant, as we verified by computing the CDHS both ways. ...
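For concreteness, below is a sketch of the two Cobb-Douglas sub-scores Y1 (interior: radius and density) and Y2 (surface: escape velocity and surface temperature) and their linear combination, following the forms in Bora et al. (2016). The weights w1 and w2 are assumed here, and the alternative equation-8 form quoted above is not reproduced in this excerpt.

```python
def cdhs(R, Dn, Ve, Ts, a=0.8, b=0.1, g=0.8, d=0.1, w1=0.5, w2=0.5):
    """Linear combination of the two Cobb-Douglas sub-scores.

    Planet parameters are in Earth Units; the elasticities default to the
    optimal DRS values quoted above. The weights w1, w2 are assumptions.
    """
    y1 = R**a * Dn**b            # Y1: interior CDHS (radius, density)
    y2 = Ve**g * Ts**d           # Y2: surface CDHS (escape velocity, temperature)
    return w1 * y1 + w2 * y2

print(cdhs(1.0, 1.0, 1.0, 1.0))  # Earth scores exactly 1.0 by construction
```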

Citations

... Obviously, this could be true for a large number of subjects across the length and breadth of contemporary research; therefore, recourse to a scientifically more acceptable method should always be of interest, and often beneficial, for a large set of users. To demonstrate this, we considered the case of the journal Astronomy and Computing within the context of SCOPUS journals in the relevant domain of AstroInformatics (Bora et al. (2016)) in particular, and Astronomy and Astrophysics in general. ...
Article
It is well known that the ranking of journals, whether in science, technology, engineering or in social sciences such as economics, is a contentious issue. For many subjects there is no single correct ranking, but a universe of rankings, each the result of subjective decisions made by its developers. The subjective element in journal rankings not only complicates matters of correctness but also affects outcomes that depend crucially on the adoption and analysis of rankings. For science and related subjects, SCOPUS and SCIMAGO hold some of the best journal ranking systems to this day, using their CiteScore and SJR indicators, respectively, to rank journals. However, owing to the manner in which both of these indicators are computed, the received ranking might not always reflect the true quality and outreach of a specific scientific journal. Obviously, this could be true for a large number of subjects across the length and breadth of contemporary research, and recourse to a scientifically more acceptable method should always be of interest to a large user base. To demonstrate this, we considered the case of the journal Astronomy and Computing, listed in SCOPUS under the domain of Astronomy and Astrophysics. The primary focus of this case study is to determine where the journal Astronomy and Computing stands vis-à-vis other journals established prior to it. The algorithm also tests the validity of the ranking and suggests an alternative rank that takes a more holistic approach to important features. While this paper focuses on a specific journal, it is easy to see that the purpose of this construct is broad-based and deep-seated at the same time, such that the algorithms can be adopted by numerous other subjects grappling with the same problem.
... The habitability problem has been tackled in different ways. Explicit Earth-similarity score computation (Bora et al., 2016), based on the parameters mass, radius, surface temperature and escape velocity, developed into the Cobb-Douglas Habitability Score and helped identify candidates with scores similar to Earth's. However, using Earth similarity alone to address habitability is not sufficient unless model-based evaluations (Saha et al., 2018c) are interpreted and equated with feature-based classification. ...
... One of the key reasons to classify exoplanets is the limitation of identifying habitability candidates by Earth similarity alone, as proposed by several metric-based evaluations, namely the Earth Similarity Index, the Biological Complexity Index (Irwin et al., 2014b), the Planetary Habitability Index (Méndez, 2018) and the Cobb-Douglas Habitability Score (CDHS) (Bora et al., 2016). In Saha et al. (2018), an advanced tree-based classifier, the Gradient Boosted Decision Tree, was used to classify Proxima b and other planets in the TRAPPIST-1 system. ...
... We note that the motivation for SBAF derives from using $kx^{\alpha}(1-x)^{1-\alpha}$ as the discriminating function to optimize the width of the two separating hyperplanes in an SVM-like formulation. This is equivalent to the CDHS formulation when the CD-HPF (Bora et al., 2016) is written as $y = kx^{\alpha}(1-x)^{\beta}$, where $\alpha + \beta = 1$, $0 \leq \alpha \leq 1$, $0 \leq \beta \leq 1$, and $k$ is suitably assumed to be 1 (the CRS condition). This representation ensures a global maximum (maximum width of the separating hyperplanes) under these constraints (Bora et al., 2016). ...
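The global-maximum claim is easy to verify: for $y = kx^{\alpha}(1-x)^{1-\alpha}$ with $0 < \alpha < 1$, setting $\frac{d(\log y)}{dx} = \frac{\alpha}{x} - \frac{1-\alpha}{1-x} = 0$ gives the maximiser $x = \alpha$ and the maximum value $k\alpha^{\alpha}(1-\alpha)^{1-\alpha}$. A quick numerical check:

```python
import numpy as np

a, k = 0.7, 1.0
x = np.linspace(1e-6, 1 - 1e-6, 100001)
y = k * x**a * (1 - x)**(1 - a)

print(x[np.argmax(y)])                       # ~0.7, i.e. the maximiser is x = alpha
print(y.max(), k * a**a * (1 - a)**(1 - a))  # numerical vs closed-form maximum, ~0.543
```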
Preprint
We present an analytical exploration of novel activation functions as a consequence of the integration of several ideas, leading to their implementation and subsequent use in the habitability classification of exoplanets. Neural networks, although a powerful engine for supervised methods, often require expensive tuning efforts for optimized performance. Habitability classes are hard to discriminate, especially when attributes used as hard markers of separation are removed from the data set. The solution is approached by investigating the analytical properties of the proposed activation functions. The theory of ordinary differential equations and fixed points is exploited to justify the lack of tuning effort needed to achieve optimal performance compared to traditional activation functions. Additionally, the relationship between the proposed activation functions and the more popular ones is established through extensive analytical and empirical evidence. Finally, the activation functions have been implemented in a plain vanilla feed-forward neural network to classify exoplanets.
... This leads to the postulation that NR can be extrapolated to other entities, particularly weaker ones such as exoplanets, assuming that the necessary conditions of the source are met. In our case, the binary in-spiral of black holes is extended to the star-planet binary pair of an exoplanet system [3], [4]. ...
Preprint
Gravitational waves have been a subject of serious study in the modern astrophysics community. While, on one end, the strain produced by gravitational waves on matter can be studied practically by laser interferometers such as LIGO, on the other end, the ripples generated by celestial bodies are obtained a priori by numerical relativity in the form of waveforms. It is often the case that these waveforms are used mostly to study the properties of black holes and neutron stars, and less so other astrophysical entities. The study in this manuscript tries to extrapolate such methodologies to weaker celestial bodies for the primary purpose of adding a new dimension to the realm of possibilities. There is a necessity to approach such studies from a statistical perspective, given the complications of studying low-mass entities. Employing machine learning techniques not only assists in analyzing data effectively but also aids in creating an automated, generalized computational model. We extend our method to classify exoplanets, establishing an interpolation between gravitational waves and exoplanets.
... The ranking of Web pages by importance, which involves an iterated matrix-vector multiplication where the dimension is many billions; searches in "friends" networks at social-networking sites, which involve graphs with hundreds of millions of nodes and many billions of edges; and the search for life on planets outside the Solar System, which involves data analytics on the massive data volumes generated by next-generation telescopes (Bora, 2016). ...
Chapter
1. Apache Software Foundation projects for Big Data
2. Data acquisition
   2.1 Freely available data set sources
   2.2 Data collection through APIs
   2.3 Web scraping
   2.4 Web crawling
3. Data pre-processing and cleanup
   3.1 Need for Hadoop MapReduce and other languages for big data pre-processing
   3.2 Comparison of Hadoop MapReduce, Python and R
   3.3 Various examples of cleansing methods and routines
4. Data analysis
   4.1 Big data analysis using Apache Foundation projects
   4.2 Various language support by Apache Hadoop
   4.3 Apache Spark for big data analytics
   4.4 Case study: Dimensionality reduction using Apache Spark, an illustrative example from Scientometric big data
       4.4.1 Why SVD? The solution may be the problem
       4.4.2 Need for SVD
       4.4.3 SVD using Hadoop Streaming with NumPy
       4.4.4 SVD using Apache Mahout
       4.4.5 SVD using Apache Spark
   4.5 Data analysis through visualization using Python