Science topics: Information Systems (Business Informatics) › Spreadsheets
Spreadsheets - Science topic
Explore the latest questions and answers in Spreadsheets, and find Spreadsheets experts.
Questions related to Spreadsheets
I have two symmetrical questionnaires structured according to a Latin-square logic. There are about ten binary features distributed differently across 70 items. Participants evaluate this list of items by selecting one of 4 possible responses for each. Each questionnaire was completed by a distinct sample of randomly selected participants; each participant received just one questionnaire (so the samples for questionnaires 1 and 2 do not overlap).
I would like to do some inferential statistics on the two questionnaires and quantify the relationship between the features structuring the answers. My results look like spreadsheets produced by Google Forms: I have all the scores for each item (1, 2, 3, 4) and I can create subgroups according to the ten features.
I hope that is clear enough. Thanks a lot!
Dear colleagues,
I am very interested in how some forest terms concerning forest restoration are interpreted in the official sources of your country.
Please, can I ask you to take a little time and fill out a small Excel spreadsheet (attached)?
Best regards,
Arthur.
I need guidance on converting my initial ICP-OES results, which were reported in mg/L. I digested a 26.0 mg sample in a 1:3 mixture of nitric and hydrochloric acid, giving a total dilution volume of 0.025 dm^3.
My current data is in mg/L and is stored in an Excel spreadsheet. Despite my efforts to calculate the wt%, I'm still facing issues and would greatly appreciate assistance from anyone who can provide guidance on achieving accurate percentage conversions.
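For reference, the arithmetic is: analyte mass (mg) = concentration (mg/L) × digest volume (L), then wt% = analyte mass / sample mass × 100. A minimal sketch in Python; only the 26.0 mg sample mass and 0.025 L volume come from the question above, the 10 mg/L reading is an invented example:

```python
def wt_percent(conc_mg_per_L, dilution_volume_L, sample_mass_mg):
    """Convert an ICP-OES concentration (mg/L) back to wt% of the original
    solid sample: analyte mass = concentration * digest volume, then
    divide by the sample mass (same mass units) and multiply by 100."""
    analyte_mass_mg = conc_mg_per_L * dilution_volume_L
    return analyte_mass_mg / sample_mass_mg * 100.0

# 26.0 mg sample digested to 0.025 L; a hypothetical 10 mg/L reading:
print(wt_percent(10.0, 0.025, 26.0))  # 0.25 mg analyte -> ~0.96 wt%
```

The same formula can be entered directly as an Excel cell formula once the three quantities sit in their own columns.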
Hello fellow 'gators' (not sure if that's what we call ourselves...)
I'm in need of some advice/links.
I want to generate city-specific Intensity/Duration/Frequency data that accounts for recent changes in rainfall patterns.
As we know the rains are-a changing and therefore our design storms and Green Infrastructure capacities should evolve in turn.
I want to take a specific city, and compare the "old" curves to the new (+future?) ones.
I am aware of how to download historical rainfall data from NOAA's Atlas 14, but I'm looking for some shortcuts, if you will:
Is there a specific type of "freeware" that does this automatically??
A spreadsheet that someone's developed and can share?
A manual method that can be elucidated?
Please advise, I'm sure I'm not the only one who ponders on these things.
Although I am familiar with the GPT Excel sheets, I couldn't find the Gt-Hbl barometer there, and there are no recent Opx-Cpx barometer models either.
It would be very helpful if someone could assist me.
Regards, and many thanks
Rishabh
I need solar irradiance data at quarter-hourly resolution, along with the load data of a commercial building at the same location, for a research simulation problem.
Hello everyone.
Could someone please tell me how or where I can get the following Excel spreadsheet programs:
1) FC–AFC–FCA and mixing modeler: a Microsoft® Excel© spreadsheet program for geochemical modeling (Yalçın Ersoy, Cahit Helvacı).
2) AFC-Modeler: a Microsoft® Excel© workbook program for modelling (Mehmet Keskin).
3) PETROMODELER (Petrological Modeler): a Microsoft® Excel© spreadsheet program for modelling melting (Emrah Yalçın Ersoy).
I will really appreciate any help, thanks.
I have a dataset of patients with ESRD and want to estimate GFR using the 2021 CKD-EPI formula.
I need to implement it programmatically, so the calculation does not rely on manual entry alone.
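A sketch of the 2021 race-free CKD-EPI creatinine equation, implemented so it can be applied to a whole dataset at once; please verify the coefficients against the published paper before relying on it for research use:

```python
def egfr_ckd_epi_2021(scr_mg_dl, age_years, female):
    """2021 CKD-EPI creatinine equation (race-free):
    eGFR = 142 * min(Scr/k, 1)^a * max(Scr/k, 1)^-1.200
               * 0.9938^Age * (1.012 if female),
    where k = 0.7 (F) / 0.9 (M) and a = -0.241 (F) / -0.302 (M).
    Scr in mg/dL; result in mL/min/1.73 m^2."""
    k = 0.7 if female else 0.9
    a = -0.241 if female else -0.302
    egfr = (142.0
            * min(scr_mg_dl / k, 1.0) ** a
            * max(scr_mg_dl / k, 1.0) ** -1.200
            * 0.9938 ** age_years)
    return egfr * 1.012 if female else egfr

# e.g. a 50-year-old male with serum creatinine 1.0 mg/dL:
print(round(egfr_ckd_epi_2021(1.0, 50, female=False), 1))  # ~91.7
```

Applied row by row (e.g. with the `csv` module), this removes the transcription step entirely.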
Hi
I am trying to set up a topmodel simulation in R. The flowlength function of the topmodel package requires the outlet coordinate of the DEM matrix, i.e. its row and column position.
Is there a practical way to get that position? I am currently using spreadsheets to locate the position visually, based on my knowledge of the watershed. Unfortunately, for a large, high-resolution DEM this is almost impossible.
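One alternative to visual inspection is to compute the indices from the outlet's map coordinates and the DEM's georeferencing (origin and cell size). A language-neutral sketch in Python; the origin and cell size below are hypothetical, and in R the raster package's rowColFromCell/cellFromXY functions offer similar functionality:

```python
import math

def outlet_row_col(x, y, xmin, ymax, cellsize):
    """Convert outlet map coordinates (x, y) to a 1-based (row, col) pair
    in a DEM matrix whose top-left cell corner sits at (xmin, ymax).
    Rows count downward from the top edge, as in an R matrix."""
    col = math.floor((x - xmin) / cellsize) + 1
    row = math.floor((ymax - y) / cellsize) + 1
    return row, col

# Hypothetical DEM: top-left corner (500000, 4200000), 30 m cells.
print(outlet_row_col(500095.0, 4199875.0, 500000.0, 4200000.0, 30.0))  # (5, 4)
```

A common refinement is to snap the result to the neighbouring cell with the highest flow accumulation, so the outlet lands exactly on the derived channel.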
EPMA data mineral recalculation
Hi, all.
I would like to know how you analyse data from a national registry that collects data on certain diseases and conditions.
In the past, people used MS Access a lot. Is R a proper solution nowadays, or do you suggest other tools? I am looking for a tool that can provide outputs in spreadsheet form.
Dear All
I'm looking for an Excel-based spreadsheet to calculate P-T conditions using olivine and orthopyroxene (or clinopyroxene) major oxide chemical compositions (by microprobe).
There are some Matlab-based "programs", but I'm not too familiar with this software.
I appreciate your help.
Cheers, and keep safe and healthy
Benigno
Hi
I used my keywords and unfortunately it returned 19,000 results. I tried using Harzing's "Publish or Perish" to extract the results, and I even narrowed the results by searching year by year, but that did not work. I am looking to extract the results as a CSV into my Excel spreadsheet. This is super urgent, and if anyone can guide me in the right direction it would be greatly appreciated.
I am working on the development of evaluation tools for the vocational guidance of secondary students. I have developed instruments to assess personality, personal interests, and academic and occupational interests, with semi-automated reports in electronic spreadsheets, but I feel these tools are very limited. By transferring these elements to a system assisted by Artificial Intelligence, I hope to include tasks that assess executive functions, academic and professional interests, values, and self-efficacy perception, with the possibility of incorporating linguistic-cultural adjustments for social inclusion (my country is characterized by an important cultural diversity). The system would be linked to a dynamic database of academic offerings (university, intermediate professional, and job opportunities, and perhaps, later, the scholarships available at the institutions that offer these careers).
The input would be structured and reactive tasks that feed the learning machine and shorten the predictive output of academic-professional recommendations.
I would like to hear your impressions of this idea, and whether anyone has experience with similar projects and could advise me.
I want to make sure that I can use this one to convert my results.
SBEDS is an Excel spreadsheet for blast analysis from the US Army Corps of Engineers.
Basically, I have a large Excel spreadsheet of all the reasons why a specific process isn't working. It contains about 1,000 reasons, and each reason can be put under 1 of 4 categories. Going through all 1,000 and categorising them individually would take too long, so what sample size do people recommend, and can I deduce things like the confidence interval/accuracy of my results from this sample size? Thank you for your help in advance.
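For a rough answer, the standard sample-size formula for estimating a proportion, with a finite population correction for N = 1,000, gives about 280 reasons for a ±5% margin at 95% confidence. A sketch; the margin, confidence level, and the worst-case p = 0.5 are conventional defaults, not figures from the question:

```python
import math

def sample_size(N, margin=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion to within +/- margin at the
    confidence level implied by z (1.96 -> 95%), with a finite population
    correction for a population of N items. p = 0.5 is the most
    conservative (largest-sample) assumption."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2          # infinite-population size
    n = n0 / (1 + (n0 - 1) / N)                      # finite correction
    return math.ceil(n)

print(sample_size(1000))  # 278 of the 1,000 reasons for +/-5% at 95% confidence
```

Note the resulting margin applies to the *proportion in each category*; tighter margins (e.g. ±3%) push the sample size up steeply.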
I am carrying out a carbonized-biomass value chain analysis, and for this work I am calculating the carbon footprint.
I would therefore appreciate if anyone can share one (carbon footprint calculator e.g., spreadsheet) with me or direct me to a link/website where I can access one.
Thank you!
Hello,
The equipment software gave me the values of Mn and Mw but I would like to check it manually because it was not me who performed the analysis and the values the analysis company gave are very strange.
The company that performed the analysis provided me with the excel spreadsheet (attached). I believe that from the data it is possible to obtain the molecular weights manually.
Thanks a lot for the help.
Hi,
I am looking for a spreadsheet-based water balance model for lake operation. Please don't refer me to papers. I am looking for an actual spreadsheet. Thanks
Hello all,
I have a large spreadsheet of antibody sequences, however in order to perform grafts properly, I need to know what numbering system was used (IMGT, Kabat, Chothia, etc). Does anyone know any software or web servers that can identify the numbering system used (preferably free resources)?
Thank you,
Theodore
I am an MPhil student and new to research. I would like some suggestions from my seniors on how to develop new formulas in an Excel spreadsheet. There are numerous videos on integration and differentiation, but if we want to work on partial differential equations and other higher mathematics, do I have to learn JavaScript or Visual Basic?
Can qualitative software, like NVivo, be used for scoping reviews after the data has been extracted into an Excel spreadsheet?
If so, how is it applied?
It is difficult to determine the volume % of Cpx, Olv, and Opx in mylonitic peridotites and very fine grained ones. Sometimes, Cpx, Olv and Opx % can be estimated from bulk chemistry based on CIPW norm.
I need a trusted spreadsheet for the CIPW calculation. Thanks so much.
I applied a survey to approximately 200 innovative companies, using an innovation-capabilities measurement instrument widely validated in previous studies, with a Likert scale from 1 to 5.
As the capabilities of the companies (all of them involved in University-Industry Cooperation) came out very high, most values are concentrated on 4 and 5, so skewness is really high. Kolmogorov-Smirnov tests showed non-normal distributions for almost all variables (spreadsheet attached).
The sample has almost no outliers, and it was not possible to foresee ex ante that the firms' capabilities would be so high.
My data analysis technique will be QCA; however, I would like to perform CFA first.
Please, help me! Thank all of you in advance!
I need assistance, please. I am still busy with my research proposal, and the feedback from my professor is somewhat vague. It's as if he didn't read through my survey questions or look at the spreadsheet with all my thought processes, and doesn't understand what my research project is actually about. I would like a third party to give me some advice. It could be that my title needs editing, or that I did not clearly articulate my goals. Maybe I wasn't specific enough, but I feel that if he read the survey he would understand better and be able to advise me better. Unfortunately we are in different countries and different time zones, and also have different first languages.
I am working on sulphide mineralization in the BGC, hosted by granites and mafic rocks. I want to characterize the geochemical nature of the different sulphide phases.
Dear authors,
I am trying to perform a temporal phase analysis of the CMJ as described previously (Power-time, force-time, and velocity-time curve analysis of the countermovement jump: impact of training); however, that analysis was performed in LabVIEW.
I have tried to perform the analysis using Excel, but I am not able to normalize the data and, therefore, compare different jumps expressed as % of jump.
Does anyone have a spreadsheet, or know how to perform this analysis in Excel?
Thanks in advance
I have a large database of posts (Yaks) collected from the social media app Yik Yak (back when it was still active). I used a program to collect Yaks at random intervals from 50 randomly selected universities, stratified by US Region, "Locale" (rural vs urban), "Control" (private vs. public), and Size (large vs not large). We collected over 115,000 Yaks. My students used NVivo 11 Pro to code the data (in this case, we were looking at substance use related posts), and we ended up coding 1670 Yaks into our Substance Use "Node". Now, I want to be able to view (and export) all 1670 of those Substance Use references, along with their associated attributes (region, locale, control, and size). We have been trying for days and can't figure out how to do this. It seems like a basic function that should be easy. Can anyone offer any help?
We want to export the data in order to run various chi-square and regression analyses, and I want the whole data set (all 1670) and not just the summary data (i.e. summed frequencies of substance use Yaks for each region). This way, the appropriate attribute values will be connected to each specific data point during the quantitative analyses.
Dear Colleagues,
Another publication related question, does anyone have any experience of becoming part of another organisation like the IEEE, BCS or ACM? I am the programme chair for the European Spreadsheets Risks Interest Group (www.eusprig.org) and we are considering joining up with an appropriate partner - the motivation for doing so is to increase visibility and credibility - is this something anyone has any experience of or any views on?
Many thanks
Simon
I am conducting a meta-analysis of Hazard Ratios (HR); however, most of the studies presented only Kaplan-Meier curves without the HR and Confidence Intervals (CI). I am extracting data manually from the curves and using Tierney et al.'s spreadsheet to calculate the HR. Nevertheless, I am concerned about the reliability of this method.
I have already tried some programs to extract numerical data from the curves, but none of them presented the number of censored patients at each time point to calculate the number of patients at risk.
Do you know any program/software able to do this? Or any that gives the HR and CI?
Thank you!
I am new to the software package and currently going through a learning curve.
The above file is not included in the data files associated with the version I am using. In an earlier software manual, I came across this example: Example 3: Multiple and non-linear covariates, and producing species occurrence maps, which uses the file Single-season example.xls. Can someone help me find it?
I am struggling to solve the equation for X:
λ = (A/X) * (1 − exp(−X/A))
λ and A are known values found from experimental analysis, say λ = 0.8 (varies 0.02-0.89) and A = 2 (ranges 1.25-2.5). Is it possible to solve the equation in an Excel spreadsheet? I tried using the Add-ins option, but it gives a wrong value. Thanks for your time and suggestions.
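Because the right-hand side decreases monotonically in X (from 1 as X → 0 down to 0 as X → ∞), the equation has a unique root for 0 < λ < 1 and can be solved numerically, either with Excel's Goal Seek (Data → What-If Analysis) or with a few lines of code. A bisection sketch using the example values above:

```python
import math

def solve_X(lam, A, lo=1e-9, hi=1e6, tol=1e-10):
    """Solve lam = (A/X) * (1 - exp(-X/A)) for X by bisection.
    The right-hand side decreases monotonically from 1 (X -> 0)
    to 0 (X -> infinity), so one root exists for 0 < lam < 1."""
    f = lambda X: (A / X) * (1.0 - math.exp(-X / A)) - lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:   # sign change: root lies in [lo, mid]
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

X = solve_X(0.8, 2.0)   # the example values from the question
print(X)                # ~0.93; substituting back recovers lambda = 0.8
```

In Excel the equivalent is: put a trial X in a cell, the formula `=(A/X)*(1-EXP(-X/A))` in another, and Goal Seek the formula cell to the measured λ by changing the trial cell.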
Hello: I am trying to find a way to make it a bit more efficient to run the lab. I have looked into several lab management programs (Findings 2, Labfolder, and some others). It seems most work well for life science labs that are not heavy on behavior. We need to carefully manage the booking of rooms and apparatuses for behavioral tasks, so we can use them maximally. It would also be handy to have the option to manage the animal colony. At the moment we have various spreadsheets and Google Calendar. This works OK, but a centralised solution would be better, one that also allows a bit of project management (keep a list of standard behavioral tasks or components of behavioural protocols, allowing you to build a new experiment, book the rooms for the times you need, put that all in a calendar, and update project progress accordingly), and perhaps even offers some form of communication platform. Perhaps a bit much to ask, but I would be happy with some of these components.
Any suggestions aside from dream on?
We all know it is possible to get path coefficients from correlation matrix (e.g., PATH ANALYSIS I: INTRODUCTION (ecu.edu) ). Using a simple solver on a spreadsheet, it is therefore possible to check the results of any article when a correlation matrix is provided.
The other way around, we can also obtain the correlations from structural equation modeling coefficients when they are not provided in a given article. Can I use this method to obtain correlations before applying Hunter and Schmidt's meta-analysis method? Do you know of software that can do this effortlessly? Performing it in a spreadsheet is tedious.
Dear friends from the rock mechanics field,
Good day. I have what may seem a minor problem, but I cannot crack it somehow.
I have attached a page from Hoek & Brown (1997) paper, which gives elaborate spreadsheet formulas for simulation of triaxial data and conversion to Mohr-Coulomb parameters, in absence of actual triaxial tests. All the formulas are explained and mutually connected except one.
What is the formula for signt?
It is the normal stress that must be specified in order to calculate the tangent to the Mohr envelope. There is a myriad of formulas involved in the problem and I just don't see the answer. I would greatly appreciate assistance.
Cheers, Hrvoje Vučemilovič
Hi everyone. I took a basic course on Markov Chains and know a little about Monte Carlo simulations and methods, but I never got to the part about spreadsheets.
If anyone can direct me to a few non-technical, not-too-hard-to-read books on Monte Carlo simulations, I would be grateful.
I have estimated the parameters by Maximum Likelihood Estimation (MLE) and the Probability Weighted Moments (PWM) method. I wish to construct the L-moment ratio diagram to graphically demonstrate that the empirical (L-skewness, L-kurtosis) coordinates of my financial asset sample lie close to the GL distribution (say), but the picture is very clumsy in R. I want to customize it and make it neat, and hence I need the freedom to work in a spreadsheet. Besides, an Excel sheet is more intuitive. Could you kindly share it? I shall be grateful to you. I am willing to cite this work in my references and put it in the acknowledgements section of my thesis, of which I shall send you a copy by next July. Please.
I have bulk rock major and trace elements data available to me.
Hi,
I am generating long lists of per-cell data that I'd like to visualize as dot plots per condition. The problem is that my conditions are defined by letters rather than numbers. Or sometimes I have multiple different chemicals at different concentrations.
A typical spreadsheet has one (name) or two (name and concentration) columns that define the condition and multiple columns of various measurements. Each row is one cell. There may be hundreds of cells per condition and the number may vary between conditions.
I am having trouble picking the right type of table in Prism to generate the graphs that I want. Would anyone have any advice for me?
Thank you!
I am working on olivine, spinel, clinopyroxene and orthopyroxene. Can someone provide, or assist me in finding, geothermobarometer spreadsheets for the above-mentioned minerals, either single-mineral or two-mineral geothermobarometers?
Thanks in advance
We are trying to keep track of the MGH IHP Research publication record. We wanted to find a way to track and download the publications onto an excel spreadsheet without having to do it manually.
I got the Illumina paired-end 16S sequencing results. I tried to analyse them with QIIME 2, but unfortunately I found myself stuck many times. Can anyone help me, and how much will it cost? Please inbox me.
Data at rest in information technology means inactive data that is stored physically in any digital form (for example: databases, data warehouses, spreadsheets, archives, tapes, off-site backups, mobile devices, etc.).
Spreadsheet software provides very little structural guidance as to how to employ it safely and effectively. Spreadsheet developers are free to employ a variety of structural components to interact with their users.
Much advice concentrates on hiding/protecting the components of the spreadsheet that make it work. How can we retain the openness of the spreadsheet presentation?
Spreadsheets have been found useful in solving process engineering problems, and their built-in function capabilities make data manipulation much simpler and faster.
Dear QIIME 2 users, I need help in preparing my map file. The Google spreadsheet available on the QIIME 2 page (https://chmi-sops.github.io/mydoc_qiime2.html) is confusing. Can I merge the forward and reverse index sequences in the same column, or should I feed them in as separate columns? I would appreciate help from anyone who has prepared a map file recently. Please inbox me.
I have tried to calculate the kinetics in a spreadsheet by the Coats-Redfern method, but I didn't get the correct result for Ea: I got a negative value instead of a positive one. I think my formula isn't correct. Can someone help me?
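As a sanity check for the spreadsheet formulas, the Coats-Redfern fit can be reproduced in a few lines. The sketch below assumes a first-order model, g(α) = −ln(1 − α); note that Ea = −slope × R, which is where a dropped minus sign most often produces a negative Ea:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def coats_redfern_Ea(T, alpha):
    """Coats-Redfern linearisation for a first-order model,
    g(alpha) = -ln(1 - alpha):
        ln(g(alpha)/T^2) = ln(A*R/(beta*Ea)) - Ea/(R*T)
    A straight-line fit of y = ln(g/T^2) against x = 1/T has slope
    -Ea/R, so Ea = -slope * R (positive for a real activation energy)."""
    x = [1.0 / t for t in T]
    y = [math.log(-math.log1p(-a) / t ** 2) for t, a in zip(T, alpha)]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R

# Synthetic check: conversions generated with Ea = 100 kJ/mol are recovered.
Ea_true = 100e3
T = [500.0 + 10.0 * i for i in range(20)]
alpha = [-math.expm1(-t ** 2 * 1e-4 * math.exp(-Ea_true / (R * t))) for t in T]
print(coats_redfern_Ea(T, alpha))  # ~100000 J/mol
```

If your experimental α comes from TGA mass loss, compute it as (m0 − m)/(m0 − m∞) before applying g(α).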
I have been using Aabel (Gigawhiz) plotting and statistical software. However, it is Mac OS centric and cannot be installed on a PC. I like this software because it enables me to directly select a data point in a graph which then highlights the point in the linked spreadsheet. This is very useful when exploring data and anomalous values. However, it is only available for Mac and I am now mainly PC based. As such, can anyone recommend a PC plotting and statistical software package that provides similar functionality? Thanks in advance : -)
This is to complete a visualization of my Professor, Dr. G's research on his fish data.
What analytic software would you recommend to examine a data set I created that lists the faculty members making up 1400+ thesis committees over a ten year period?
I'm interested in seeing how often the same faculty members have served on committees over this time period. I'm also interested in coding faculty members by discipline and am hoping this layer of meaning can also be part of my analysis.
Any recommendations for how to best analyze a spreadsheet of thesis committees with these goals in mind would be appreciated!
In response to several inquiries over the past few years, I have undertaken the task of updating the thermodynamic properties of steam. It has been 25 years since the IAPWS first released SF95, so this effort is long overdue. My goal is to extend the range of applicability up to 6000K and 150GPa, considerably above any existing formulation. The data on which to base this extension has been available since 1974. While I have not completed the task, I have made considerable progress. As some researchers have expressed an urgent need for this work, I am releasing preliminary results. All of the data plus an Excel AddIn are contained in the following archive http://dudleybenton.altervista.org/miscellaneous/AllSteam104.zip, which will be updated as the work progresses. There is an installer to facilitate selection and placement of the correct (32-bit or 64-bit) AddIn, although this is unnecessary, as the libraries (AllSteam32.xll and AllSteam64.xll) are included. The current implementation is piecemeal and not optimal, but it does function. When complete, I will also release the source code. The average error in calculated density (compared to experimental) for the 2930 data points from 18 studies (see spreadsheet H2O_PVT_data.xls) for the SF95 and 2020 formulations is 0.68% and 0.46%, respectively. The maximum error is more telling, at 267% and 40%, respectively, as shown in the attached figure. I welcome your discussion.
![](profile/Dudley-Benton-2/post/Steam-properties-to-6000K-and-150GPa/attachment/5e78d7433843b0047b36f805/AS%3A872273980448769%401584977731432/image/experimental_vs_calculated_density.gif)
Anyone know about any freely available software or code that can optimise protocol sequences for 3D unconventional electrodes setups, including borehole electrodes?
The closest research I found is the BGS optimised algorithm series (led by Paul Wilkinson), but I don't think they are freely available - at least I couldn't find them anywhere.
I am also familiar with Electre Pro but it is neither free nor an optimisation framework.
I also know the SEER spreadsheet, but it is 2D, and very rigid in several aspects (electrodes spacing, arrays and total number of electrodes). It also does not allow exporting the protocol assessed.
Thanks.
I have entered data into my Excel spreadsheet. The data is descriptive. I was wondering if there is a way of quantifying this data, such as finding common words, etc.
Any suggestions are appreciated :)
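One simple starting point is a word-frequency count over the descriptive column. A sketch; the example cells and the tiny stopword list are purely illustrative:

```python
import re
from collections import Counter

def word_counts(texts, top=10):
    """Return the `top` most common words across a column of free-text
    cells, case-folded, ignoring a small illustrative stopword list."""
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "is", "in", "not"}
    words = []
    for cell in texts:
        words += [w for w in re.findall(r"[a-z']+", cell.lower())
                  if w not in stopwords]
    return Counter(words).most_common(top)

# Hypothetical descriptive entries, one per spreadsheet row:
cells = ["Machine stopped due to overheating",
         "Overheating of the motor",
         "Operator error during setup"]
print(word_counts(cells))  # ('overheating', 2) ranks first
```

Export the Excel column as CSV and feed it in via the `csv` module; for anything beyond counts (themes, sentiment), dedicated text-analysis tools are the next step.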
Please, how can we apply the GLUE or SUFI-2 method to the WEAP model to estimate uncertainty? GLUE and SUFI-2 are included in SWAT-CUP; can I find these methods in other software or an Excel spreadsheet?
Thanks in advance
need a good and emerging answer... Thanks
I am struggling with measuring inequality in the health workforce between regions, based on the number of workers in each region and the region's population, using the Atkinson index (spreadsheet attached). Is anyone willing to help me find where I made a mistake?
Thanks,
AM
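For cross-checking a spreadsheet, the unweighted Atkinson index is short enough to implement directly. The values and ε below are illustrative; a population-weighted variant (often more appropriate for workers-per-capita comparisons across regions of unequal size) would replace the 1/n averages with population-share weights:

```python
import math

def atkinson(y, epsilon=0.5):
    """Atkinson inequality index for positive values y (e.g. health
    workers per capita by region), with inequality-aversion parameter
    epsilon > 0. Returns 0 for perfect equality, approaching 1 as
    inequality grows."""
    n = len(y)
    mean = sum(y) / n
    if epsilon == 1:
        geo = math.exp(sum(math.log(v) for v in y) / n)  # geometric mean
        return 1 - geo / mean
    ede = (sum(v ** (1 - epsilon) for v in y) / n) ** (1 / (1 - epsilon))
    return 1 - ede / mean

print(atkinson([1.0, 1.0, 1.0]))        # perfect equality -> 0.0
print(atkinson([0.5, 1.0, 2.0], 0.5))   # dispersion -> positive index
```

A frequent spreadsheet slip is applying the (1 − ε) exponent after averaging rather than to each region's value before averaging; comparing cell-by-cell against code like this usually locates it.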
I have carried out an experiment on cells from the same cell line, grown in two plates and exposed to different conditions (Control vs Treated)
When I collected the cells after n days of treatment, I carried out PCR targeting a specific gene, normalised to a housekeeping gene, and then analysed the data using the DDCT method.
I now have the DCT values in a spreadsheet waiting for statistical analysis- however, I am not sure which test to use, or whether I should use DCT or DDCT values.
My DCT results seem to tick the boxes for a non-parametric test, but I am not sure if it should be paired or unpaired. My logic is that they could be paired, since they came from the same cell line, or that they could be unpaired since I am comparing Day n (control) vs Day n (treated).
Results:
Control= 12.59 11.19 12.31 10.89 11.08 11.32 10.97 8.48 10.25
Treated= 11.28 10.98 10.35 11.39 10.36 10.83 10.77 7.91 9.8
DDCT values:
-1.31 -0.21 -1.97 0.49 -0.73 -0.49 -0.2 -0.57 -0.45
Please let me know if you would like more detail. I'm still something of a statistics beginner, so I'm not sure how much depth is required to be able to answer this question.
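If the replicates are treated as matched pairs, a quick stdlib sanity check is an exact two-sided sign test on the DCT values above. This is only a rough check, not a replacement for a Wilcoxon signed-rank test run in a proper statistics package:

```python
from math import comb

def sign_test(x, y):
    """Exact two-sided sign test for paired samples (zero differences are
    dropped): under H0, positive and negative differences are equally
    likely, so the count of positives is Binomial(n, 0.5)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    k = max(k, n - k)                                 # more extreme tail
    p = 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(p, 1.0)

control = [12.59, 11.19, 12.31, 10.89, 11.08, 11.32, 10.97, 8.48, 10.25]
treated = [11.28, 10.98, 10.35, 11.39, 10.36, 10.83, 10.77, 7.91, 9.8]
print(sign_test(control, treated))  # 0.0390625: 8 of 9 pairs go one way
```

Whether pairing is legitimate depends on whether row i of control and row i of treated really are the same replicate; if the rows are just independent wells listed in arbitrary order, an unpaired test is the safer choice.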
I downloaded climate data from CCMA for the model CanESM2. I wanted to use this data as input for water quality models. Unfortunately, the data is in .nc format and is not usable in a spreadsheet such as Excel. Could anyone help me? Thanks in advance for your help and advice!
Articles I have written mostly use data from other people. One possible exception was my calculation of lexical growth rates (lexical scaling on arxiv and RG), which, outside of glottochronology, no one else seems to have much bothered with. To calculate lexical growth rates I used historical dictionaries of the English language. Collecting, adjudging, and organizing words in the English lexicon is a data project much vaster in scope than merely taking word counts and calculating rates. It seems to me that in this era, there are all kinds of sources of data which a person can use as a basis for theory. I performed no experiments (unless spreadsheet calculations and forming equations as theoretical experiments count). I instead find an abundance of data. The data I found were susceptible to new theoretical investigation. So I wonder: if data in this computer age is increasing by vast amounts, can theory keep up? Will AI remedy that possible deficiency?
I'm looking for a dataset containing the seismic activity at Mt Etna between 2000-2010, and another dataset (or the same one) containing the same data for Kīlauea. Preferably they could be downloaded as a txt file or onto an Excel spreadsheet.
This arises from issues I encountered in 2007. From September 2005 to the end of May 2007 I was looking for the base of a logarithm that would connect the rate of English lexical growth, about 3.39% per decade to the rate of divergence measured by Morris Swadesh for related Indo-European languages, which he found to be less than 14% per thousand years. I was looking for a number, but after many spreadsheets and failed guesses found that a network’s mean path length (mu, say) worked as the base of the logarithm. (I wrote a paper in early 2008 about this, on arxiv.)
I noticed this peculiarity about C·log(n), where n is the number of nodes in the network and C is the clustering coefficient. From one perspective, if there are n nodes, then log(n) = k is the number of degrees of freedom in the network (or ensemble) relative to the mean path length mu. But at the same time, log(n) = k could also represent k time periods occurring for a set of nodes equal in number to the mean path length mu. So which is it? Is k the degrees of freedom for an ensemble? Or does k represent the length of time for a single cluster of mu nodes? Is this an irrelevant mathematical curiosity? Or is there some connection between k as degrees of freedom for n nodes relative to mu and the way that time works? Why is the number of degrees of freedom for n nodes equivalent to the degrees of freedom relative to a single cluster of mu nodes over k periods of time? For example, does this imply that an event that occurs for an entire ensemble can in some way be emulated by an event repeated over k time periods for mu nodes? Perhaps it is nothing, but it seems that J. Willard Gibbs, in his Elementary Principles of Statistical Mechanics, used a kind of variation of this when he calculated the statistical distribution of multiple copies of the same ensemble. Do these mathematical aspects indicate something about the nature of time, namely that a period of time for mu nodes can correspond to an event at a point of time for an entire ensemble? Does that seem to echo Minkowski's notion of spacetime?
I am currently trying to use the spreadsheet provided in the paper to calculate the melt pool size for other materials. My problem is that I am not able to find the right relation between B and p and u and P. Could someone please help me with this?
Thanks
Shivam
Hello all,
I am looking to calculate Cohen's d for effect size, ideally using a reliable Excel spreadsheet.
I understand the equation d = (M1 - M2) / SDpooled, but using a spreadsheet would be much quicker.
Also, on the same note, my aim was to calculate d with the equation above, using the mean and SD of the differences between pre- and post-test measures. So if Group A scored 23.0 in test 1 and 24.5 in test 2, the mean difference I would use would be 1.5. If Group B scored 25.4 in test 1 and 26 in test 2, the equation would be
(1.5 - 0.6) / SDpooled... I hope that makes sense.
Or should I use independent-samples t-tests on the mean differences between pre- and post-test measures? Does either seem like a sensible method of calculating d?
Your help and advice would be greatly appreciated.
Regards,
Lee
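For anyone wanting to check a spreadsheet against code, Cohen's d with the pooled SD is only a few lines. The change scores below are invented solely to mirror the 1.5 vs 0.6 mean differences described in the question:

```python
import math
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d with the pooled standard deviation:
        d = (M1 - M2) / SDpooled,
        SDpooled = sqrt(((n1-1)*s1^2 + (n2-1)*s2^2) / (n1 + n2 - 2))."""
    n1, n2 = len(group_a), len(group_b)
    s1, s2 = stdev(group_a), stdev(group_b)
    sd_pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
                          / (n1 + n2 - 2))
    return (mean(group_a) - mean(group_b)) / sd_pooled

# Hypothetical pre-to-post *change scores* for the two groups:
a = [1.2, 1.8, 1.5, 1.4, 1.6]   # Group A changes, mean 1.5
b = [0.4, 0.8, 0.6, 0.5, 0.7]   # Group B changes, mean 0.6
print(cohens_d(a, b))           # ~4.65 for this made-up data
```

Computing d on change scores, as sketched here, is one accepted approach for pre/post designs; pooling the SDs of the change scores (not of the raw pre or post scores) is the detail spreadsheets most often get wrong.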
I am starting up a laboratory and want to have lab inventory management in place. I have been used to keeping track with spreadsheets on Google Sheets, but there must be something better out there, so that you can keep track of the amounts of reagents left and have better quality control over stock.
What's the best one out there?
Hi There,
I administered an online questionnaire and all my responses have been properly coded.
However, I am facing a couple of issues.
1. Non-response: I understand that missing fields are common in questionnaires. Is it necessary (some textbooks recommend it) to replace the '-' (dash) with a '999' when cleaning up the data? I am afraid that the '999' will affect my analysis.
2. Incomplete questionnaires: Should incomplete questionnaires be included as part of the analysis, or are they considered void? If considered void, is it recommended that I delete the cases, or leave the spreadsheet as it was when I exported it to SPSS?
GREATLY appreciate any advice. Thank you.
Hello,
I am working on stratigraphy and lithogeochemistry of a VMS-hosting sequence of Paleoproterozoic volcanic rocks and would like to plot drillcore samples that I collected in ioGAS and Geoscience ANALYST. I only have the collar location, drillhole orientation and sample depth; however, both software packages require XYZ coordinates to display the sample accurately. I know these calculations can easily be done with Gocad, Target or other 3D mining software, but I am wondering if there is another (cheaper) option where I could process small batches of data on a need basis. It does not have to be something that takes into account all the drillhole deviation; I am not looking for that kind of precision. Thank you.
Simon
Dear everyone,
I am trying to calculate the pore volume using the alpha-s method and have plotted the data from the sorption analysis against a reference with similar surface chemistry. I understand that I can get the micropore properties at low pressures. However, the micropore volume from my plot is more than 3 times higher than those I usually see in references. Is there a formula to be used to calculate the micropore volume from the values in the plot? Please find attached my spreadsheet/a picture of my plot.
Kind regards,
David
![](profile/David-Buentello-Montoya-2/post/How-to-calculate-micro-mesopore-surface-area-pore-volumes-using-the-alpha-s-method/attachment/5ba8965acfe4a76455f58691/AS%3A674292190699529%401537775194351/image/alpha-s.jpg)
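A common reading of the alpha-s plot, hedged and worth verifying against your sorption textbook, is that extrapolating the linear multilayer region back to alpha-s = 0 gives an intercept in adsorbed gas volume (cm3 STP/g), which is then converted to a liquid micropore volume; for nitrogen at 77 K the conversion factor is 0.001547. A sketch with made-up data points:

```python
# Sketch, assuming nitrogen at 77 K: fit the linear (multilayer) region
# of the alpha-s plot and convert the intercept from gas volume at STP
# to liquid micropore volume. The data values here are made up.
alpha = [0.8, 1.0, 1.2, 1.4, 1.6]            # alpha-s of reference isotherm
v_ads = [120.0, 128.0, 136.0, 144.0, 152.0]  # adsorbed volume, cm3 STP/g

n = len(alpha)
mean_x = sum(alpha) / n
mean_y = sum(v_ads) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(alpha, v_ads))
         / sum((x - mean_x) ** 2 for x in alpha))
intercept = mean_y - slope * mean_x          # cm3 STP/g at alpha_s = 0

# 0.001547 converts cm3 of N2 gas at STP to cm3 of liquid N2 at 77 K.
v_micro = intercept * 0.001547               # micropore volume, cm3/g
```

If you are reading the intercept directly as cm3 STP/g without the gas-to-liquid conversion, that alone would inflate the value by roughly three orders of magnitude, which may explain the discrepancy you describe.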
I need these files regarding my research.
Hello,
I am testing whether some firm-specific variables are correlated with firms' leverage ratios using panel data. I want to include time-invariant dummy variables in my regression model. Is that correct, and how do I do it? Is it by adding a new column for each dummy variable in my spreadsheet, e.g. for industry classification?
Thanks.
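Yes, a categorical variable like industry classification becomes one 0/1 column per category. A sketch in Python/pandas with hypothetical data:

```python
import pandas as pd

# Hypothetical panel rows: firm, year, leverage, industry classification.
df = pd.DataFrame({
    "firm": ["A", "A", "B", "B"],
    "year": [2020, 2021, 2020, 2021],
    "leverage": [0.4, 0.5, 0.3, 0.2],
    "industry": ["manufacturing", "manufacturing", "services", "services"],
})

# One 0/1 column per industry; drop_first avoids the dummy-variable trap
# (perfect collinearity with the intercept).
df = pd.get_dummies(df, columns=["industry"], drop_first=True)
```

One caveat: time-invariant dummies are fine in pooled OLS or random-effects models, but firm fixed effects absorb them, so their coefficients cannot be estimated in a within-firm specification.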
Dear Monica,
Hi,
What are the specific objectives of this project? I have some experience with financial management theory and spreadsheet development, perhaps I can be part of the action.
Regards,
Ernest.
I'm going to perform seismic analysis in LUSAS, an FEM software package. I downloaded ground motion data (accelerograms) from the CESM database in the form of a text file. I have no clue how to move forward. I just know that I need two columns in a spreadsheet, one representing time and the other acceleration.
Any help would be appreciated!
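Strong-motion text files typically hold a few header lines followed by acceleration values at a fixed sampling interval, so the time column can be reconstructed from the interval. The exact layout varies by database, so the header count and dt below are assumptions to check against your file. A sketch that writes a tiny made-up file first so it runs end to end:

```python
import csv

# Make a tiny made-up input file in the usual shape: header lines, then
# whitespace-separated acceleration values sampled at a fixed interval.
with open("accelerogram.txt", "w") as f:
    f.write("Station XYZ  Event 2020-01-01\n")
    f.write("Units: cm/s^2\n")
    f.write("NPTS=6, DT=0.01\n")
    f.write("0.001 -0.003 0.010\n0.021 -0.015 0.004\n")

dt = 0.01          # sampling interval in seconds, read from your file header
header_lines = 3   # number of header lines -- varies between files

values = []
with open("accelerogram.txt") as f:
    for i, line in enumerate(f):
        if i < header_lines:
            continue
        values.extend(float(tok) for tok in line.split())

# Build (time, acceleration) rows and write a CSV that Excel/LUSAS can read.
rows = [(round(i * dt, 6), a) for i, a in enumerate(values)]
with open("accelerogram.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["time_s", "acceleration"])
    w.writerows(rows)
```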
Hello,
I recently had a batch of addresses geocoded by the Texas A&M geocoding service, and I am quite confused about how to get all those dots to appear on my map. When I add the file to the ArcMap table of contents, it just appears as a spreadsheet. Do I need to join it to an existing layer, such as a TIGER file? When I right-click on the geocoded spreadsheet in the table of contents, it asks if I want to geocode the addresses, which is a no, but it also offers "Display XY Data". Is that it? When I tried that, it said something about not being able to proceed because there are no Object IDs, and after clicking OK, only a single coordinate dot appeared on my map! I have over 5,000!
Thank you
I would like to calculate the FWHM from XRD data (2 theta vs. intensity) that I was given in Excel spreadsheet format. I also happen to have X'Pert HighScore, but unfortunately I have no idea how to import the XRD data into the software. Can anyone please help me with this problem?
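If it helps while you sort out the import, the FWHM of an isolated peak can be computed straight from the two columns: find the maximum, then linearly interpolate where the intensity crosses half of it on each side. A sketch with a made-up triangular peak (real XRD peaks also need background subtraction first):

```python
def fwhm(two_theta, intensity):
    """Full width at half maximum of a single peak by linear interpolation."""
    peak = max(range(len(intensity)), key=intensity.__getitem__)
    half = intensity[peak] / 2.0

    def crossing(i, j):
        # 2-theta where intensity crosses 'half' between points i and j
        x0, x1 = two_theta[i], two_theta[j]
        y0, y1 = intensity[i], intensity[j]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    # walk left from the peak until intensity falls to half max or below
    i = peak
    while i > 0 and intensity[i] > half:
        i -= 1
    left = crossing(i, i + 1)

    # walk right from the peak the same way
    j = peak
    while j < len(intensity) - 1 and intensity[j] > half:
        j += 1
    right = crossing(j - 1, j)

    return right - left

# Made-up peak centred at 30 degrees 2-theta:
tt = [29.0, 29.5, 30.0, 30.5, 31.0]
ii = [0.0, 50.0, 100.0, 50.0, 0.0]
width = fwhm(tt, ii)
```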
Hello,
I am having some spreadsheet issues in Excel, so I'll get right to it.
I am keeping population count data of every county in the USA.
However, I do not have population data for Puerto Rico.
I would like to take my spreadsheet of every USA county and delete all counties/municipalities that belong to Puerto Rico or any other U.S. territories outside of the 50 states.
How can I do this?
Is there some way to take my spreadsheet that has info for only the 50 states' counties, select those entries, then go to the spreadsheet that has info for all 50 states' counties plus territories, and tell Excel to delete all rows that don't match the entries of my other spreadsheet?
Thanks!
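One route outside Excel is to load both sheets into pandas and keep only rows whose state code appears in the 50-state list; `isin` does the matching. A sketch with made-up rows and a hypothetical `state` column:

```python
import pandas as pd

# Hypothetical frames: 'all_counties' includes territories; keep_states
# lists the codes you want to keep (in practice, taken from your
# 50-state spreadsheet, e.g. states_only["state"].unique()).
all_counties = pd.DataFrame({
    "state": ["AL", "PR", "TX", "GU"],
    "county": ["Autauga", "Adjuntas", "Travis", "Guam"],
    "pop": [58805, 18020, 1290188, 153836],
})
keep_states = ["AL", "TX"]

# Keep only rows whose state code is in the 50-state list.
filtered = all_counties[all_counties["state"].isin(keep_states)]
filtered = filtered.reset_index(drop=True)
```

Within Excel itself, a helper column with COUNTIF against the 50-state sheet (flagging zero matches), then filtering and deleting the flagged rows, achieves the same result.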
Hello,
So I am trying to do some Excel gymnastics here and am having a lot of trouble sorting my data, and even figuring out how to explain what I want to do. Basically, I have an original Excel spreadsheet and a new Excel spreadsheet, which is just an update of the original containing more data. The new spreadsheet has four more rows than the original. How do I have the new data replace the original data while maintaining the order?
I'll try and help visualize the issue here with a made-up example regarding number of universities in each state:
Original spreadsheet (2016):
FID State Universities
1 AL 124
2 KY 155
3 CA 166
4 NY 176
5 UT 98
New spreadsheet (2017):
State Universities
AL 127
MI 133
AZ 188
KY 150
CA 166
TN 145
NY 179
UT 98
So now we have 2017 data for the original states, and also for some new states. How do I take the data from the 2017 spreadsheet and merge it into the 2016 spreadsheet, while keeping the same order with reference to FID?
Thank you!
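One way in Python/pandas, using the question's own tables: attach the existing FIDs to the 2017 rows by matching on State, give the brand-new states FIDs continuing after the old maximum, and sort. A sketch (the FID assignment rule for new states is an assumption; adjust as needed):

```python
import pandas as pd

original = pd.DataFrame({          # 2016 spreadsheet
    "FID": [1, 2, 3, 4, 5],
    "State": ["AL", "KY", "CA", "NY", "UT"],
    "Universities": [124, 155, 166, 176, 98],
})
new = pd.DataFrame({               # 2017 spreadsheet
    "State": ["AL", "MI", "AZ", "KY", "CA", "TN", "NY", "UT"],
    "Universities": [127, 133, 188, 150, 166, 145, 179, 98],
})

# Attach the existing FID to each 2017 row by State.
merged = new.merge(original[["FID", "State"]], on="State", how="left")

# States absent from 2016 get fresh FIDs after the old maximum.
new_mask = merged["FID"].isna()
start = original["FID"].max() + 1
merged.loc[new_mask, "FID"] = range(start, start + int(new_mask.sum()))

# Restore the original FID order, with the new states appended at the end.
merged = merged.sort_values("FID").astype({"FID": int}).reset_index(drop=True)
```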
I have all of my 38 years' worth of research from the archives and publication archives in an Excel spreadsheet. I am looking for someone who can tell me how to upload it here on ResearchGate.
I need to look at butterfly UKBMS transect data and, put simply, due to a syncing error on my uni's part, we've been given an assignment asking us to do stats tests even though we haven't yet begun learning about stats. I've figured out I need to do a chi-squared test since I'm dealing with categories and enumerations. The problem is, not having done a single stats test before, this is rather overwhelming. I've been given spreadsheets for 2008-2017 with counts of around 30 butterfly species, each year containing 1-26 weeks and counts for each of those weeks.
What hypothesis could I test with Chi-squared? I've read into the tests and have a basic grasp of what I'd need to do for the test itself (association, goodness of fit, cross tabulation depending on setup), but just not sure how to go about forming a hypothesis for that much data or how to really set it up.
I did a goodness of fit test for the total counts for the Large Skipper for 2008-2017 and it comes out as highly significant. Same for the association test I did comparing July mean temperature and Large Skipper abundance for 2008-2013. Problem is, I'm not sure what any of this means if I'm being quite honest. For example, association test gives the highest contribution to 2011 (over 50%). This is because there are 77 Large Skipper for that year and there just so happens to be a higher mean temp then. There could be a thousand reasons for their having recorded 77 Large Skipper.
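For intuition, the goodness-of-fit statistic itself is just a short sum, which may make the output less of a black box. A sketch with made-up yearly counts, where the null hypothesis is that counts are spread evenly across years:

```python
# Chi-squared goodness-of-fit: are yearly counts of one species uniform?
# The counts below are made up, not the real UKBMS data.
observed = [77, 52, 61, 48, 62]                        # e.g. one count per year
expected = [sum(observed) / len(observed)] * len(observed)  # uniform null

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
dof = len(observed) - 1            # degrees of freedom = categories - 1

# Compare chi2 with the critical value for dof at your alpha
# (9.488 for dof = 4 at alpha = 0.05); larger means reject uniformity.
```

A significant result only says the counts are unevenly distributed; as you note, it cannot by itself say *why* 2011 had 77 Large Skippers, so any causal claim about temperature needs a different design.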
In May 2017, I posted my first research regarding the relationship between prime numbers and Fibonacci numbers:
https://www.linkedin.com/pulse/relationships-between-prime-number-fibonacci-thinh-nghiem/
I have had the chance to go further with this subject. In detail, I realized that a prime number can be decomposed into a sum of several Fibonacci numbers. Below are some examples:
29 = 21 + 3 + 5
107 = 89 + 13 + 5
1223 = 987 + 233 + 3
I have successfully decomposed the first 1,000 prime numbers with the above methodology. The calculations can be found at
https://docs.google.com/spreadsheets/d/1sGmyr9dZwLhfFWcSgwviwm2X838h0CQF4KWRqX_eXkA/edit#gid=685523897
I have tried unsuccessfully to limit the series to only 3 Fibonacci numbers. As you can see in my shared worksheet, some prime numbers require 6 or even 7 Fibonacci numbers. I expect that in future research a simpler formula relating these types of numbers can be discovered.
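For readers who want to reproduce such decompositions: the classic greedy (Zeckendorf) approach repeatedly subtracts the largest Fibonacci number that fits. Zeckendorf's theorem guarantees this works for every positive integer, not only primes; note it can differ from the hand-worked sums above (it gives 29 = 21 + 8 rather than 21 + 5 + 3):

```python
def greedy_fib_sum(n):
    """Greedy (Zeckendorf-style) decomposition of a positive integer n
    into distinct Fibonacci numbers, largest first."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
        if n == 0:
            break
    return parts
```

Running it on the examples above: `greedy_fib_sum(107)` gives `[89, 13, 5]` and `greedy_fib_sum(1223)` gives `[987, 233, 3]`, matching the posted sums.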
All feedback is welcome.
Regards,
Thinh Nghiem
The CBE website (http://comfort.cbe.berkeley.edu/) is difficult to use for large samples. I want to calculate ASHRAE 55, EN 15251 and adaptive thermal comfort for 1,000 samples. Is there any source to download a spreadsheet to make the calculations faster?
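If a ready-made spreadsheet doesn't turn up, the adaptive comfort temperatures are simple linear functions of the prevailing mean outdoor temperature and are easy to batch. The coefficients below are the commonly quoted ones for the ASHRAE 55 adaptive model and EN 15251; verify them against the editions of the standards you are using:

```python
def adaptive_comfort(t_out_running_mean):
    """Adaptive comfort temperatures (deg C) from the prevailing/running
    mean outdoor temperature. Coefficients as commonly quoted for the
    ASHRAE 55 adaptive model and EN 15251 -- check your standard edition."""
    ashrae = 0.31 * t_out_running_mean + 17.8    # ASHRAE 55 adaptive
    en15251 = 0.33 * t_out_running_mean + 18.8   # EN 15251 (Cat I-III bands)
    return ashrae, en15251

# Applied over a whole column of samples at once:
samples = [18.0, 22.5, 27.0]
results = [adaptive_comfort(t) for t in samples]
```

The acceptability bands are then fixed offsets around these values (e.g. roughly plus/minus 2.5 and 3.5 deg C for ASHRAE 90% and 80% limits), so flagging 1,000 samples is one more list comprehension.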
Is there any way to draw/show routes in Google Maps from an Excel spreadsheet? I have some O/D bike routes (O/D latitude and longitude in Excel) which I need to draw in Google Maps and convert to a shapefile. Is there any way to draw all these routes at once? I can draw them manually one by one, but I want to know if there is a way to draw all routes at once. Thanks.
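One batch route that avoids manual drawing: write the O/D pairs out as a KML file (plain text, standard library only), import that into Google My Maps, and convert KML to shapefile in QGIS or with ogr2ogr. A sketch with made-up coordinates and a hypothetical column layout; note that KML wants longitude before latitude:

```python
# Each route here is a straight O/D segment; values are made up.
routes = [
    # (name, origin_lat, origin_lon, dest_lat, dest_lon)
    ("trip-1", 23.81, 90.41, 23.75, 90.39),
    ("trip-2", 23.79, 90.42, 23.77, 90.36),
]

placemarks = []
for name, olat, olon, dlat, dlon in routes:
    placemarks.append(
        f"<Placemark><name>{name}</name><LineString><coordinates>"
        f"{olon},{olat} {dlon},{dlat}"          # KML order: lon,lat
        f"</coordinates></LineString></Placemark>"
    )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
    + "".join(placemarks) + "</Document></kml>"
)
with open("routes.kml", "w") as f:
    f.write(kml)
```

In practice you would read `routes` from your Excel export (e.g. saved as CSV) rather than hard-coding it; these are straight O/D lines, not road-network paths.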
I've only seen it used for routing single-event hydrographs. I'm trying to model reservoir water levels and spillway flows using a 10-year inflow time series, bathymetry (I have already calculated the stage-storage and stage-area relationships), an ogee spillway rating equation, and prescribed minimum conservation flows through separate outlets. Using a basic spreadsheet water balance at a daily or hourly time step, the head and flows over the spillway are unrealistically large. A 15-minute step helped but is straining my computing resources. Can level-pool (storage-indication) routing be applied here? Are any special considerations/assumptions needed?
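For what it's worth, nothing in the storage-indication method limits it to single events; applied step by step it handles a long series, provided the tables cover the full stage range. A sketch of the recurrence 2*S2/dt + O2 = I1 + I2 + 2*S1/dt - O1 with made-up placeholder tables (substitute your stage-storage and spillway rating curves):

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation; xs must be ascending."""
    if x <= xs[0]:
        return ys[0]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return ys[-1]

dt = 3600.0                             # time step, s
stage   = [0.0, 1.0, 2.0, 3.0]          # m above spillway crest
storage = [0.0, 1e5, 2.5e5, 4.5e5]      # m3 (your stage-storage curve)
outflow = [0.0, 10.0, 40.0, 90.0]       # m3/s (your spillway rating)

# Storage-indication function G(h) = 2S/dt + O, tabulated once per dt.
g = [2 * s / dt + o for s, o in zip(storage, outflow)]

inflow = [5.0, 20.0, 60.0, 45.0, 15.0, 5.0]   # m3/s inflow series
h, out = 0.0, [0.0]
for k in range(1, len(inflow)):
    s_now = interp(h, stage, storage)
    o_now = interp(h, stage, outflow)
    rhs = inflow[k - 1] + inflow[k] + 2 * s_now / dt - o_now
    h = interp(rhs, g, stage)           # invert G for the new stage
    out.append(interp(h, stage, outflow))
```

Because G is tabulated once, each step is cheap even for a decade of records; the implicit assumptions are a level pool (no wedge storage) and a single-valued stage-discharge relation, and conservation releases would be added as a separate outflow term.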
I need a free program to import spectrophotometer readings into Microsoft Excel (2007 to 2013) under Windows (7 to 10).
Our spectrophotometer has an RS232 port and we want to measure SOD activity, so we need free software to transfer the O.D. readings to Excel within a short, specific time.
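A generic free option is a small Python script using the pyserial package to log readings to a CSV, which Excel opens directly. The port name, baud rate and instrument line format below are assumptions; check the spectrophotometer manual for the real settings:

```python
import csv
import time

def parse_reading(line):
    """Pull the first numeric token out of an instrument line such as
    'OD 0.532' (the exact format depends on your spectrophotometer)."""
    for tok in line.replace(",", " ").split():
        try:
            return float(tok)
        except ValueError:
            continue
    return None

def log_readings(port="COM1", seconds=60, out_csv="sod_readings.csv"):
    """Read lines from the RS232 port and append timestamped O.D. values
    to a CSV. Requires 'pip install pyserial'; port and baud rate are
    assumptions for illustration."""
    import serial                      # third-party: pyserial
    with serial.Serial(port, 9600, timeout=1) as ser, \
         open(out_csv, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["elapsed_s", "od"])
        start = time.time()
        while time.time() - start < seconds:
            od = parse_reading(ser.readline().decode(errors="ignore"))
            if od is not None:
                w.writerow([round(time.time() - start, 2), od])
```

For SOD kinetics the `seconds` window can be set to the assay interval, and the CSV then opens in any Excel version you listed.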
I'm getting stuck in particular once my ROIs are selected and I tell it to count foci. Each cell gets a spreadsheet with no headings telling me what each column represents. I'm not interested in batch mode as I don't have that many pictures. A) Can all the cell data be put into one single sheet? B) What are my headings?
Thanks for any help.
betty
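On (A), one way to combine per-cell result files into a single sheet is a short script that stacks them and records which file each row came from. The filenames and column headings below are hypothetical placeholders for whatever your macro actually writes out:

```python
import csv
import glob

# Create two tiny stand-in result files so the example runs end to end;
# in practice these would be the per-cell spreadsheets your macro saves.
for name, rows in [("cell_01.csv", [[1, 12], [2, 7]]),
                   ("cell_02.csv", [[1, 4]])]:
    with open(name, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["roi", "foci_count"])
        w.writerows(rows)

# Stack every per-cell file into one table, tagging rows with the source.
combined = []
for path in sorted(glob.glob("cell_*.csv")):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            row["cell"] = path
            combined.append(row)

with open("all_cells.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["cell", "roi", "foci_count"])
    w.writeheader()
    w.writerows(combined)
```

On (B), the headings depend on which measurements are ticked in the analysis settings, so the reliable fix is to re-run with headers enabled (in ImageJ, the Results table options control what is saved) rather than guessing column meanings after the fact.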
We have disseminated and received feedback on an internal customer service survey. No one in the unit is particularly skilled in statistics or data analysis. The survey itself was developed in Survey Monkey using a 5 point Likert scale along with questions about demographic information. Because Survey Monkey was new to staff, the survey was set up in a way that prevents certain comparisons and the aggregating of responses to a group of responses cannot be cleanly performed.
I have downloaded all the data into Excel and am seeking help with the following:
I have 6 categories of respondents with 13 to 536 individuals per category answering questions under 6 themes. The themes have 2 – 6 questions each.
What I would like to do is obtain the average rating for each theme by each respondent category.
As stated, no one here is statistically savvy, so in case I'm not making sense, I will provide an example. Our staff is divided into the following categories: administration, program 1, program 2, program 3 ... program 6. I want to obtain the average rating given by a category (e.g. administration) for a theme (e.g. Agency Teamwork, which has six questions). My spreadsheet is in Excel.
Any help you can provide will be greatly appreciated.
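One way in Python/pandas: average each theme's question columns per respondent, then group by category. The column names below are hypothetical; an Excel PivotTable with AVERAGE as the aggregation gives the same result without leaving the spreadsheet:

```python
import pandas as pd

# Hypothetical layout: one row per respondent, a 'category' column, and
# the theme's questions in columns named teamwork_q1 ... teamwork_q3.
df = pd.DataFrame({
    "category": ["admin", "admin", "program1", "program1"],
    "teamwork_q1": [4, 5, 3, 2],
    "teamwork_q2": [5, 4, 3, 3],
    "teamwork_q3": [4, 4, 2, 3],
})

theme_cols = [c for c in df.columns if c.startswith("teamwork_")]

# Average the theme's questions per respondent, then per category.
df["teamwork"] = df[theme_cols].mean(axis=1)
theme_by_category = df.groupby("category")["teamwork"].mean()
```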
Does anyone know if I can calculate Vuong's test in SAS, or if not, does anyone have an Excel spreadsheet that can do the trick?
The Vuong (1989) test is a likelihood-ratio test for non-nested model selection. The Vuong test statistic allows both models to have explanatory power, but provides direction concerning which of the two is closer to the true data-generating process. Simply put, I want to use the Vuong test to calculate a z-score (comparing the R-squared of the two models) in order to choose between two models.
For instance, I would like to compare:
P = EARN + BVE to
P = ADJEARN + ADJBVE
where P is stock price, EARN is accounting earnings, BVE is book value of equity, and ADJ means adjusted.
I need this badly for some final testing in my Ph.D. dissertation.
Any help is highly appreciated.
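In the absence of a ready-made SAS macro or spreadsheet, the statistic is small enough to compute directly from the two models' residuals. Below is a sketch of the basic (uncorrected) Vuong z for two OLS models with homoskedastic normal errors; treat it as an illustration, not a vetted implementation:

```python
import math

def vuong_z(resid1, resid2):
    """Vuong (1989) z-statistic comparing two non-nested OLS models via
    their pointwise normal log-likelihoods. Positive z favours model 1.
    Sketch only: MLE variances, no degrees-of-freedom or BIC correction."""
    n = len(resid1)
    s1 = sum(e * e for e in resid1) / n   # MLE error variance, model 1
    s2 = sum(e * e for e in resid2) / n   # MLE error variance, model 2

    def loglik(e, s):
        # log density of N(0, s) evaluated at residual e
        return -0.5 * math.log(2 * math.pi * s) - e * e / (2 * s)

    m = [loglik(e1, s1) - loglik(e2, s2) for e1, e2 in zip(resid1, resid2)]
    mean_m = sum(m) / n
    var_m = sum((x - mean_m) ** 2 for x in m) / n
    return math.sqrt(n) * mean_m / math.sqrt(var_m)
```

Here `resid1` would be the residuals from P = EARN + BVE and `resid2` those from the ADJ model; |z| > 1.96 selects a model at the 5% level, and some variants add a Schwarz (BIC) correction when the models have different numbers of parameters.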
Hello! Does anyone have an Excel spreadsheet, operating under Windows, of the geothermometer and oxygen barometer of Ghiorso and Evans (2008)?
The Excel spreadsheet has many variables and formulas. I need assistance implementing an optimization model that minimises the difference between the observed and predicted evaporation over a given period of time, e.g. N days. The idea is to estimate two parameters using the model. I have challenges implementing such a model. Your assistance will be greatly appreciated.
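Within Excel itself, the Solver add-in does exactly this: set the objective cell to the sum of squared differences and let Solver vary the two parameter cells. As a sketch of the same idea in code, with a hypothetical two-parameter model and placeholder numbers, a crude grid search stands in for Solver:

```python
# Hypothetical model: E = a * T + b * W (evaporation from temperature and
# wind). The model form and every number below are placeholders.
temps = [20.0, 25.0, 30.0, 22.0]     # daily temperature
winds = [2.0, 3.0, 1.5, 2.5]         # daily wind speed
observed = [4.2, 5.6, 5.7, 4.8]      # observed evaporation over N days

def sse(a, b):
    """Sum of squared errors between predicted and observed evaporation."""
    return sum((a * t + b * w - e) ** 2
               for t, w, e in zip(temps, winds, observed))

# Brute-force search over a parameter grid; Solver or scipy.optimize
# would do this more efficiently on a real problem.
best = min(
    ((a / 100.0, b / 100.0) for a in range(0, 51) for b in range(0, 201)),
    key=lambda p: sse(*p),
)
```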
Hi,
I am writing a thesis about predicting abnormal stock returns based on sentiment analysis of tweets.
More specifically, we have a huge dataset of tweets, corresponding to a randomized sample of about 1% of all tweets during a year.
Now, we want to sort out the tweets mentioning the companies in the index we are looking at, which is the EURO STOXX 50.
We want to filter our dataset for tweets containing any cashtag ($) for our companies. For example, AstraZeneca will be $AZN, its ticker symbol. So for this index we will filter for a list of 50 cashtags. How can we do this? Preferably in Excel.
I have enclosed a picture of what our spreadsheet looks like, as well as a sample of the dataset.
KR
Benjamin
![](profile/Benjamin-Warberg/post/How_do_I_filter_a_dataset_for_specific_words_contained_in_a_column_Tweets/attachment/59d623c36cda7b8083a1e970/AS%3A348290831929344%401460050413421/image/Screen+Shot+2016-04-05+at+16.50.34.png)
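Outside Excel, a few lines of pandas do the filtering; since '$' is a regex metacharacter, the tags need escaping. The column name and tickers below are assumptions standing in for your real sheet:

```python
import re

import pandas as pd

# Hypothetical tweets; in practice, read from your export with pd.read_csv.
tweets = pd.DataFrame({
    "text": [
        "Bullish on $AZN after trial results",
        "Nice weather today",
        "$SAP and $SIE both up",
    ]
})
cashtags = ["$AZN", "$SAP", "$SIE"]   # extend to all 50 constituents

# Build one alternation pattern, escaping the '$' in each tag.
pattern = "|".join(re.escape(tag) for tag in cashtags)
filtered = tweets[tweets["text"].str.contains(pattern, regex=True)]
```

In Excel itself, a helper column along the lines of `=SUMPRODUCT(--ISNUMBER(SEARCH($F$1:$F$50,A2)))>0` (with the cashtags listed in F1:F50) flags matching rows for a normal filter.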
Hello,
I think an example would be the easiest way to be clear: if I were to go to the URL (http://www.ncbi.nlm.nih.gov/pubmed/26644394) and then want an Excel spreadsheet (or something outside Excel, though Excel is my comfort zone) to auto-populate with certain portions of the page, how would I set it up? For instance, I might have a column for the PubMed ID (26644394 for this paper; it is in the URL and also always listed at the bottom left of the PubMed page for each entry), another column for the full title of the paper, another with the author names, one with the year published... and maybe things that would require 'clicking', such as the drop-down 'Author information' section, or the full name of the journal (note that the URL gives the abbreviated name, so that might be complex and require matching the abbreviation against a database at a different URL and taking the full name from there).
The desire is that I could automate the process so I only specify the pubmed URL and the other entries automatically populate. Many programs do this for certain entries so I am sure in principle it is not difficult, but I don't know how to do it so that the specific information the user wants is easily generated - rather than a pre-set version from a program like Mendeley which might simultaneously generate more information fields than you need while at the same time not collecting certain information you'd like - once you set up the initial protocol.
Does anyone know how one might go about this? Excel or not, and whether it requires a software download or another workaround, this would be really helpful to me and perhaps to other users on this site. So thank you for your suggestions!
Please have a good day.
PS - if you can figure out how to also collect the total number of citations a paper has, and the impact factor of the journal at the time the article in question was published, i.e. through a workaround via a separate source, that would amaze me and I would appreciate thoughts on that too! Thank you.
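For the main request, NCBI's E-utilities `esummary` endpoint returns most of these fields as JSON, so no screen scraping is needed. A sketch below; the response field names should be double-checked against the E-utilities documentation, and citation counts are not in esummary (NIH's iCite API is one separate source that exposes them):

```python
import json
import re
from urllib.request import urlopen

def pmid_from_url(url):
    """Extract the PubMed ID from a pubmed URL like .../pubmed/26644394."""
    m = re.search(r"pubmed/(\d+)", url)
    return m.group(1) if m else None

def fetch_summary(pmid):
    """Query NCBI E-utilities esummary (JSON) for one PubMed record and
    keep a few fields; network access required when actually run."""
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
           f"?db=pubmed&id={pmid}&retmode=json")
    with urlopen(url) as resp:
        rec = json.load(resp)["result"][pmid]
    return {
        "pmid": pmid,
        "title": rec.get("title"),
        "authors": "; ".join(a["name"] for a in rec.get("authors", [])),
        "journal": rec.get("fulljournalname"),   # full, not abbreviated
        "pubdate": rec.get("pubdate"),
    }
```

Given a list of PubMed URLs, `rows = [fetch_summary(pmid_from_url(u)) for u in urls]` followed by `pd.DataFrame(rows).to_excel("papers.xlsx")` produces the auto-populated sheet, with only the URL as input.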
Examples can be molecular dynamics, chemical reactions, particle physics, astronomy, even weather patterns. It's the data in flat form, e.g. spreadsheets, that I need. Attached is an example of exoplanet data from a book and the clusters my algorithm identified. The data can be multi-dimensional; I will take care of that. Thank you in advance.
![](profile/Tony-Scott-2/post/can_anyone_point_me_to_some_data_for_cluster_analysis/attachment/59d622d66cda7b8083a1d2f9/AS%3A298635301539855%401448211612405/image/planets_clusters.jpg)
Thanks in advance for your replies.
I work a lot with Landsat data but am by no means an expert. I suspect many researchers would find it useful to query the Landsat scene database for reflectance values at an array of points without going through the steps of downloading tiles and then processing them individually in Arc. Has anyone found (or written) code or an efficient workflow that does this?
Ideally, one should be able to use the solution to download selected band reflectance values (either coincident or averaged over a small area) into a spreadsheet, for over 100 points, from multiple time points and multiple tiles.
This "program" or spreadsheet needs to be simple enough so that someone with limited training can enter their point count observations and generate a density estimate. The user will not need to know anything more than how to enter the data. The calculation of density will be entirely black box.
I have created a course module for final-year agriculture undergrads, "Cloud Computing and Ground Truth Data", to teach them how to use Python to capture and analyse data. I am surprised by how often they resort to spreadsheets and GENSTAT for data crunching, which are not appropriate. I have stopped using MATLAB as every library I need is now open source in Python. Python is particularly useful for crunching large longitudinal files of movement and rumen data. Could we set up a standardized library?