Science topic

Spreadsheets - Science topic

Explore the latest questions and answers in Spreadsheets, and find Spreadsheets experts.
Questions related to Spreadsheets
  • asked a question related to Spreadsheets
Question
4 answers
I have two symmetrical questionnaires structured according to a Latin-square logic. About ten binary features are distributed differently across 70 items. The participant evaluates this list of items by selecting one of 4 possible responses. Each questionnaire was filled in by a separate sample of randomly selected participants; each participant was given just one questionnaire (so the participants for questionnaires 1 and 2 are different).
I would like to do some inferential statistics on the two questionnaires and quantify the relationship between the features structuring the answers. My results look like spreadsheets produced by Google Forms: I have all the scores for each item (1, 2, 3, 4) and I can create subgroups according to the ten features.
I hope I have been clear enough, thanks a lot!
Relevant answer
Answer
Hi, after a meeting with a statistician from my lab I've found that a good answer to my question is to apply ordinal regression and then multiple comparisons on the obtained scores. I will rearrange my database and check whether this solution brings interpretable results.
  • asked a question related to Spreadsheets
Question
1 answer
Dear colleagues,
I am very interested in how some forest terms concerning forest restoration are interpreted in the official sources of your country.
Please, can I ask you to take a little time and fill out a small Excel spreadsheet (attached)?
Best regards,
Arthur.
Relevant answer
Answer
  • Choosing species that can adapt is a thorny area, riddled with difficulties.
  • It seems to me that one should sometimes keep the species that have already grown well on their sites.
  • The introduction of exotic species must be carried out slowly and surely, taking into account site conditions and the requirements of the ecotypes (climate type, minimum and maximum temperatures, soil type and its micro-relief, etc.).
  • Finally, this is a subject that deserves not only to be examined but also studied and developed.
  • asked a question related to Spreadsheets
Question
1 answer
I'm in need of guidance regarding the conversion of my initial ICP-OES results, which were reported in mg/L. I performed a dilution on a 26.0 mg sample using a 1:3 mixture of nitric and hydrochloric acid, resulting in a total dilution volume of 0.025 dm^3 (25 mL).
My current data is in mg/L and is stored in an Excel spreadsheet. Despite my efforts to calculate the wt%, I'm still facing issues and would greatly appreciate assistance from anyone who can provide guidance on achieving accurate percentage conversions.
Relevant answer
Answer
Hi Sheeraz,
To calculate the weight percentage from the concentration obtained on the ICP, you can apply this formula:
wt.% = ((Initial conc. in mg/L × Vol. in L) / Mass in mg) × 100
For example, a sample with a concentration of 10.8 mg/L Al:
((10.8 mg/L × 0.025 L) / 26.0 mg) × 100 ≈ 1.04 wt%
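As a quick sanity check, the unit conversion above can be wrapped in a few lines of Python (a minimal sketch; the function name is mine):

```python
def wt_percent(conc_mg_per_l, volume_l, sample_mass_mg):
    """Convert an ICP-OES concentration (mg/L) back to wt% of the solid sample."""
    return conc_mg_per_l * volume_l / sample_mass_mg * 100.0

# Worked example: 10.8 mg/L Al in 0.025 L of digest from a 26.0 mg sample
print(round(wt_percent(10.8, 0.025, 26.0), 2))  # → 1.04
```

Note that the exact value is 1.0385, which rounds to 1.04 wt%.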
  • asked a question related to Spreadsheets
Question
6 answers
Hello fellow 'gators' (not sure if that's what we call ourselves...)
I'm in need of some advice/links.
I want to generate city-specific Intensity/Duration/Frequency data that accounts for recent changes in rainfall patterns.
As we know, the rains are a-changing, and therefore our design storms and Green Infrastructure capacities should evolve in turn.
I want to take a specific city and compare the "old" curves to the new (+future?) ones.
I am aware of how to download historical rainfall data from NOAA's Atlas 14, but I'm looking for some shortcuts, if you will:
Is there a specific type of "freeware" that does this automatically??
A spreadsheet that someone's developed and can share?
A manual method that can be elucidated?
Please advise, I'm sure I'm not the only one who ponders on these things.
Relevant answer
Answer
Also just found this interesting open-source tool: RainIDF
An Excel add-in for derivation of rainfall intensity-duration-frequency (IDF) curves, based on annual maxima and partial duration series.
A research paper on RainIDF has been published in the Journal of Hydroinformatics: http://www.iwaponline.com/jh/015/jh0151224.htm
System requirements: Windows PC, Microsoft Excel 2007/2010
  • asked a question related to Spreadsheets
Question
2 answers
Although I am familiar with the GPT Excel sheets, I couldn't find the Gt-Hbl barometer there, and there seem to be no more recent Opx-Cpx barometer models either.
It would be very helpful if someone could assist me.
Regards, and many thanks
Rishabh
Relevant answer
Answer
Hi Rishabh, how are you?
  • asked a question related to Spreadsheets
Question
5 answers
Solar irradiance data at quarter-hour resolution, and load data for a commercial building at the same location, are needed for a research simulation problem.
Relevant answer
Answer
The NASA POWER data access portal; for India you can also get it from VEDAS.
  • asked a question related to Spreadsheets
Question
1 answer
Hello everyone. Could someone please tell me how or where I can get the following Excel spreadsheet programs:
1) FC–AFC–FCA and mixing modeler: a Microsoft® Excel© spreadsheet program for geochemical modeling, by Yalçın Ersoy and Cahit Helvacı.
2) AFC-Modeler: a Microsoft® Excel© workbook program for modelling, by Mehmet Keskin.
3) PETROMODELER (Petrological Modeler): a Microsoft® Excel© spreadsheet program for modeling melting, by Emrah Yalçın Ersoy.
I will really appreciate any help, thanks.
Relevant answer
Answer
I think that every SPSS installation includes support for Excel spreadsheets.
  • asked a question related to Spreadsheets
Question
7 answers
I have a dataset of patients with ESRD and want to estimate GFR using the 2021 CKD-EPI formula.
Relevant answer
Answer
Hello Dineo,
Here is the Stata code to calculate eGFR according to CKD-EPI 2021:
gen eGFR01 = .
* males (SexM1 == 1): kappa = 0.9
replace eGFR01 = 142 * (PreopCreatinine/0.9)^(-1.2) * 0.9938^Ageatdx if SexM1==1 & PreopCreatinine > 0.9
replace eGFR01 = 142 * (PreopCreatinine/0.9)^(-0.302) * 0.9938^Ageatdx if SexM1==1 & PreopCreatinine <= 0.9
* females (SexM1 == 0): kappa = 0.7, with the 1.012 female factor
replace eGFR01 = 142 * (PreopCreatinine/0.7)^(-1.2) * 0.9938^Ageatdx * 1.012 if SexM1==0 & PreopCreatinine > 0.7
replace eGFR01 = 142 * (PreopCreatinine/0.7)^(-0.241) * 0.9938^Ageatdx * 1.012 if SexM1==0 & PreopCreatinine <= 0.7
best regards.
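For readers working outside Stata, the same 2021 race-free equation can be cross-checked with a short Python sketch (variable names are mine):

```python
def egfr_ckd_epi_2021(scr_mg_dl, age_years, is_male):
    """CKD-EPI 2021 (race-free) eGFR in mL/min/1.73 m^2.

    kappa = 0.9 (male) or 0.7 (female); alpha = -0.302 (male) or -0.241 (female).
    """
    kappa = 0.9 if is_male else 0.7
    alpha = -0.302 if is_male else -0.241
    egfr = (142.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha       # branch for Scr <= kappa
            * max(scr_mg_dl / kappa, 1.0) ** -1.200      # branch for Scr > kappa
            * 0.9938 ** age_years)
    return egfr if is_male else egfr * 1.012
```

The min/max form is equivalent to the two-branch if conditions in the Stata code: only one of the two power terms differs from 1 for any given creatinine value.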
  • asked a question related to Spreadsheets
Question
3 answers
I need to implement it so that it is not subject to human correction alone.
Relevant answer
Answer
From my experience, this could be done in about 7 steps:
  1. Determine the problem you want to solve with AI: AI can be used for a variety of tasks in Excel, from data analysis to predictive modeling. Before you begin, it's important to identify the specific problem you want to solve and how AI can help.
  2. Choose an AI tool or add-in: Excel has several AI tools and add-ins available, such as the Azure Machine Learning add-in or the Power Query tool. Choose the one that best fits your needs and expertise.
  3. Prepare your data: AI requires clean and well-organized data to work effectively. Make sure your data is formatted properly and free of errors or inconsistencies.
  4. Train your AI model: Depending on the AI tool you choose, you may need to train your model using a set of data. This will help the model learn and make more accurate predictions or analyses.
  5. Deploy your AI model: Once your model is trained, you can deploy it in your Excel spreadsheet. This may involve using an add-in or integrating with other tools, depending on the specifics of your project.
  6. Test and refine your model: After deploying your AI model, it's important to test it thoroughly and refine it as needed. This may involve tweaking parameters, adjusting data inputs, or updating the model based on new data.
  7. Monitor and update your model: AI models require ongoing monitoring and updating to stay accurate and effective. Make sure to regularly review and update your model as needed to ensure it continues to meet your needs.
Overall, deploying AI in an Excel spreadsheet requires a combination of technical expertise, data preparation skills, and problem-solving abilities. With the right tools and approach, however, AI can help you unlock valuable insights and make more informed decisions in your work. I hope this works out for you. Good luck. However, you may also want to look into Python.
  • asked a question related to Spreadsheets
Question
2 answers
Hi
I am trying to set up a topmodel simulation in R. The flowlength function of the topmodel package requires the outlet coordinate of the DEM matrix, i.e. the row and column position.
Is there a practical way to get that position? I am currently using spreadsheets to locate the position visually, based on my knowledge of the watershed. Unfortunately, for a large DEM with high resolution, that is almost impossible.
Relevant answer
Answer
To get the outlet coordinate (row and column) of a DEM matrix for topmodel in R, a practical approach is to take the cell with the highest flow accumulation, which is normally the watershed outlet, and convert that cell index to a row/column position. Two caveats: raster::terrain() only computes flow direction ("flowdir"), not flow accumulation, so the accumulation grid has to come from another tool (e.g. the whitebox package or a GIS); and xyFromCell() returns map x/y coordinates, whereas rowColFromCell() is the function that returns the row and column.
1. Load the raster package and read in your DEM:
library(raster)
dem <- raster("path/to/dem.tif")
2. Compute flow accumulation outside raster (e.g. with whitebox::wbt_d8_flow_accumulation) and read the result back in:
flow_acc <- raster("path/to/flow_acc.tif")
3. Take the cell with the maximum accumulation as the outlet and convert it to row and column:
outlet_cell <- which.max(flow_acc[])
outlet_row_col <- rowColFromCell(dem, outlet_cell)
The outlet_row_col variable now contains the row and column of the outlet cell of your DEM, which you can pass to topmodel's flowlength().
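Whatever tool produces the flow-accumulation grid, the core step is just an argmax over that grid; a minimal NumPy sketch of the idea on toy data (not a real DEM, and assuming the outlet is the maximum-accumulation cell):

```python
import numpy as np

# Toy flow-accumulation grid standing in for one derived from a real DEM;
# the outlet of a watershed is normally the cell with maximum accumulation.
flow_acc = np.array([
    [1, 1, 2],
    [1, 3, 9],   # 9 = most upstream area drains through this cell
    [1, 2, 4],
])

row, col = np.unravel_index(np.argmax(flow_acc), flow_acc.shape)
print(row, col)  # → 1 2
```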
  • asked a question related to Spreadsheets
Question
3 answers
EPMA data mineral recalculation
Relevant answer
Answer
  • asked a question related to Spreadsheets
Question
3 answers
Hi, all.
I would like to know how you analyse data from a national registry that collects data on certain diseases and conditions.
In the past, people used MS Access a lot. Is R a proper solution nowadays? Or do you suggest some other tools? I am looking for a tool that can provide outputs as spreadsheets.
Relevant answer
Answer
Aleksandar P. Medarevic - so you probably know this: The Epidemiologist R Handbook (https://epirhandbook.com/en/index.html)? Maybe some of the functions you're looking for are already prepared there, ready to copy and paste?
  • asked a question related to Spreadsheets
Question
2 answers
Requesting a spreadsheet
Relevant answer
Answer
Thanks for your reply to my question. Unfortunately, Dr. Verma passed away in 2020. I have contacted Dr. Armstrong-Altrin and he kindly helped me resolve this matter.
  • asked a question related to Spreadsheets
Question
13 answers
Dear All
I'm looking for an Excel-based spreadsheet to calculate P-T conditions using olivine and orthopyroxene (or clinopyroxene) major oxide chemical compositions (by microprobe).
There are some Matlab-based "programs", but I'm not too familiar with this software.
I appreciate your help.
Cheers, and keep safe and healthy
Benigno
Relevant answer
Answer
If you want Opx-Cpx thermometry I can send it to you, and likewise for olivine-pyroxene.
  • asked a question related to Spreadsheets
Question
1 answer
Hi
I used my keywords and unfortunately the search returned 19,000 results. I tried using Harzing's "Publish or Perish" to extract the results, and I even narrowed them down by searching year by year. That did not work. I am looking to extract the results via CSV into my Excel spreadsheet. This is super urgent, and if anyone can guide me in the right direction it would be greatly appreciated.
Relevant answer
Answer
Hello Dee,
Are you trying to organize information on one person's publications? On all publications related to some topic? As background for a study you're planning? Identifying publications from persons affiliated with a specific school or university? Or something else?
Perhaps if you could clarify your goal, one or more folks might be able to offer a more focused recommendation.
Good luck with your work.
  • asked a question related to Spreadsheets
Question
2 answers
I am working on the development of evaluation tools for the vocational guidance of secondary students. I have developed instruments to assess personality, personal interests, and academic and occupational interests, with semi-automated reports in electronic spreadsheets, but I feel these tools are quite limited. I hope that by transferring these elements to a system assisted by Artificial Intelligence, tasks that assess executive functions, academic and professional interests, values, and self-efficacy perception could be included, with the possibility of incorporating linguistic-cultural adjustments (my country is characterized by important cultural diversity) for social inclusion. The system would also be linked to a dynamic database of academic offerings (university, intermediate professional, and job opportunities; perhaps later including available scholarships at the institutions that offer these programs).
The input would be structured and reactive tasks that feed the learning machine and sharpen the predictive output of academic-professional recommendations.
I would like to hear your impressions, and whether anyone has had experience with similar projects and could advise me.
Relevant answer
Answer
It's a nice and rewarding job. Evaluation tools will help school managers find out the shortcomings, weaknesses and potentials that can be developed.
  • asked a question related to Spreadsheets
Question
1 answer
I want to make sure that I can use this one to convert my results.
Relevant answer
Answer
Dear Islam Chargui,
first: this question refers to a definition of salinity (the Practical Salinity Scale, PSS-78, documented in https://salinometry.com/PDF/UNESCO37.pdf) which is to some extent outdated. The updated definition of salinity is given by the Thermodynamic Equation of State of Seawater (TEOS-10). Some information on this change can be found at https://www.teos-10.org/
second: the PSS is actually defined not in terms of an absolute conductivity, but as a conductivity ratio, i.e. the ratio COMPARED to the conductivity of a certain KCl solution. This makes a lot of sense, because conductivity sensors can have bias, can drift, etc. So when measuring conductivity it makes sense also to measure the conductivity of a standard solution now and then. This is how it is done when you want high precision. The standard that is usually used is IAPSO Standard Seawater. An introduction to the practical measurements is found here:
Maybe this helps you to get started.
Cheers, Christoph
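To make the conductivity-ratio point concrete: at the reference temperature of 15 °C and atmospheric pressure, PSS-78 maps the conductivity ratio R directly to practical salinity through a fifth-order polynomial in sqrt(R). A sketch using the published PSS-78 coefficients (the full scale adds a temperature-correction term, which vanishes at exactly 15 °C):

```python
# PSS-78 coefficients a0..a5; they sum to 35, so R = 1 gives S = 35 by design.
A = [0.0080, -0.1692, 25.3851, 14.0941, -7.0261, 2.7081]

def practical_salinity_15c(R):
    """Practical salinity from conductivity ratio R at 15 degC, 0 dbar."""
    return sum(a * R ** (i / 2.0) for i, a in enumerate(A))

print(round(practical_salinity_15c(1.0), 4))  # → 35.0
```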
  • asked a question related to Spreadsheets
Question
8 answers
Epidote spreadsheet
Relevant answer
Answer
Thank you very much for the recommendation, I will try it. @Michale
  • asked a question related to Spreadsheets
Question
1 answer
SBEDS is an Excel spreadsheet for blast analysis from the U.S. Army Corps of Engineers.
Relevant answer
Answer
  • asked a question related to Spreadsheets
Question
3 answers
Basically, I have a large Excel spreadsheet of all the reasons why a specific process isn't working. It contains about 1,000 reasons, and each reason can be put under one of 4 categories. Going through all 1,000 and categorising them individually would take too long, so what sample size do people recommend, and can I deduce things like the confidence interval/accuracy of my results from this sample size? Thank you for your help in advance.
Relevant answer
Answer
You could run these analyses in a 'loop'. In attachment an example for the Fisher exact test. You'll first need to convert your Excel data into an R data set (Rda).
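For a rough number: with a population of 1,000 reasons, Cochran's sample-size formula with a finite-population correction is a common starting point. A sketch assuming a 95% confidence level and a 5% margin of error (whether a simple random sample suits your categorisation task is a separate question):

```python
import math

def sample_size(population, z=1.96, p=0.5, margin=0.05):
    """Cochran's formula with finite-population correction.

    z: z-score for the confidence level (1.96 for 95%),
    p: assumed proportion (0.5 is the most conservative choice),
    margin: acceptable margin of error.
    """
    n0 = z ** 2 * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(1000))  # → 278
```

So roughly 278 of the 1,000 reasons would need to be categorised under these assumptions; for a very large population the same formula approaches the familiar 385.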
  • asked a question related to Spreadsheets
Question
3 answers
I am making a carbonized-biomass value chain analysis, and for this work I am calculating the carbon footprint.
I would therefore appreciate it if anyone could share a carbon footprint calculator (e.g., a spreadsheet) with me, or direct me to a link/website where I can access one.
Thank you!
  • asked a question related to Spreadsheets
Question
2 answers
Hello,
The equipment software gave me the values of Mn and Mw, but I would like to check them manually because the analysis was not performed by me and the values the analysis company gave seem very strange.
The company that performed the analysis provided me with the Excel spreadsheet (attached). I believe that from the data it is possible to obtain the molecular weights manually.
Thanks a lot for the help.
Relevant answer
Answer
Dear Gustavo Zago, similar RG threads have already discussed your question; please have a look at them and the references therein. In addition, the standards used for calibration are sometimes misleading; please have a look at the third reference. Maybe the general information in the attached files also helps. My regards.
  • asked a question related to Spreadsheets
Question
12 answers
Hi,
I am looking for a spreadsheet-based water balance model for lake operation. Please don't refer me to papers. I am looking for an actual spreadsheet. Thanks
Relevant answer
Answer
Dear Hadi Hamaaziz Muhammed and dear Rafael Anleu I pm'ed you
Best,
Angelos
  • asked a question related to Spreadsheets
Question
1 answer
Hello all,
I have a large spreadsheet of antibody sequences, however in order to perform grafts properly, I need to know what numbering system was used (IMGT, Kabat, Chothia, etc). Does anyone know any software or web servers that can identify the numbering system used (preferably free resources)?
Thank you,
Theodore
Relevant answer
Answer
The SAbPred Server will renumber your sequences to your chosen numbering scheme, allowing you to work with a standardised numbering scheme:
My preferred scheme, of course, is the AHo scheme ; )
  • asked a question related to Spreadsheets
Question
5 answers
I am an MPhil student and new to research. I want some suggestions from my seniors on how to develop new formulas in an Excel spreadsheet, as there are numerous videos on integration and differentiation. If we want to work on partial derivatives and some higher mathematics, do I have to learn JS or Visual Basic?
Relevant answer
Answer
Thank you, then it's pointless to develop such formulas in a spreadsheet. Better to explore MATLAB.
  • asked a question related to Spreadsheets
Question
4 answers
Can qualitative software, like NVivo, be used for scoping reviews after the data has been extracted into an Excel spreadsheet?
What is the application for such use?
Relevant answer
Answer
Look, NVivo is just software for analysis.
It is always the skill of the researcher that puts the software to use for their purpose.
  • asked a question related to Spreadsheets
Question
2 answers
It is difficult to determine the volume % of Cpx, Olv, and Opx in mylonitic and very fine-grained peridotites. Sometimes the Cpx, Olv and Opx percentages can be estimated from bulk chemistry based on the CIPW norm.
I need a trusted spreadsheet for the CIPW calculation. Thanks so much.
Relevant answer
Answer
Hello, here is the spreadsheet for CIPW calculation.
Best regards, Zilong
  • asked a question related to Spreadsheets
Question
6 answers
I've applied a survey to approximately 200 innovative companies, using an innovation-capabilities measurement instrument widely validated by previous studies, with a Likert scale from 1 to 5.
As the capabilities of the companies (all involved in University-Industry Cooperation) turned out to be very high, most values are concentrated on 4 and 5, so skewness is really high. Kolmogorov-Smirnov tests showed a non-normal distribution for almost all variables (spreadsheet attached).
The sample has almost no outliers. And the thing is, it was not possible to foresee ex ante that the firms' capabilities would be so high.
My data analysis technique will be QCA; however, I would like to perform a CFA first.
Please, help me! Thank all of you in advance!
Relevant answer
Answer
Hi,
DWLS expects a normally distributed latent response variable underlying the ordinal responses. And as simulation studies show that Likert scales with >= 5 categories work fine, I would rather use robust ML (that is, applying the Satorra-Bentler or Yuan-Bentler corrected versions of the chi-square test and robust standard errors). Here are some papers, of which esp. Li (2016) may be highly relevant. He shows that DWLS is better at estimating loadings but weaker at estimating factor correlations. However, I would assume that your severe nonnormality makes the case for robust ML even more salient :)
All the best,
Holger
Li, C.-H. (2016). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods, 48(3), 936-949. doi:10.3758/s13428-015-0619-7
Rhemtulla, M., Brosseau-Liard, P., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17(3), 354-373.
Chou, C.-P., Bentler, P. M., & Satorra, A. (1991). Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: A monte-carlo-study. British Journal of Mathematical and Statistical Psychology, 44, 347-357.
Enders, C. K. (2001). The impact of nonnormality on full information maximum-likelihood estimation for structural equation models with missing data. Psychological Methods, 6(4), 352-370.
  • asked a question related to Spreadsheets
Question
3 answers
I need assistance, please. I am still busy with my research proposal, and the feedback from my professor is somewhat vague. It's as if he didn't read through my survey questions or look at the spreadsheet with all my thought processes, and doesn't understand what my research project is actually about. I would like a third party to give me some advice. It could be that my title needs editing, or that I did not clearly articulate my goals. Maybe I wasn't specific enough, but I feel that if he read the survey he would understand better and be able to advise me better. Unfortunately, we are in different countries, different time zones, and also have different first languages.
Relevant answer
Answer
Yes, you should pretest, possibly with cognitive interviews.
  • asked a question related to Spreadsheets
Question
5 answers
I am working on sulphide mineralization in the BGC hosted by granites and mafic rocks. I want to characterize the geochemical nature of the different sulphide phases.
Relevant answer
Answer
Please check the attached excel file, hope it can help you.
Best regards,
Mohamed Faisal
  • asked a question related to Spreadsheets
Question
2 answers
Dear authors,
I'm trying to perform a temporal phase analysis of the CMJ as described previously (Power-time, force-time, and velocity-time curve analysis of the countermovement jump: impact of training). However, that analysis was performed in the LabVIEW program.
I have tried to perform the analysis using Excel, but I am not able to normalize the data and, therefore, compare different jumps expressed as % of jump.
Does anyone have a spreadsheet, or know how to perform this analysis in Excel?
Thanks in advance
Relevant answer
Answer
I am not sure, but I will recommend your question so that the experts can clarify it.
  • asked a question related to Spreadsheets
Question
4 answers
I have a large database of posts (Yaks) collected from the social media app Yik Yak (back when it was still active). I used a program to collect Yaks at random intervals from 50 randomly selected universities, stratified by US Region, "Locale" (rural vs. urban), "Control" (private vs. public), and Size (large vs. not large). We collected over 115,000 Yaks. My students used NVivo 11 Pro to code the data (in this case, we were looking at substance-use-related posts), and we ended up coding 1,670 Yaks into our Substance Use "Node." Now, I want to be able to view (and export) all 1,670 of those Substance Use references, along with their associated attributes (region, locale, control, and size). We have been trying for days and can't figure out how to do this. It seems like a basic function that should be easy. Can anyone offer any help?
We want to export the data in order to run various chi-square and regression analyses, and I want the whole data set (all 1,670), not just the summary data (i.e. summed frequencies of substance-use Yaks for each region). This way, the appropriate attribute values will be connected to each specific data point during the quantitative analyses.
Relevant answer
Answer
Hi Jason, I realize that you asked this question 4 years ago. Did you get an answer somewhere? I have the same problem/question and I also cannot figure it out. I agree that it does seem like a basic function that should exist (somewhere).
  • asked a question related to Spreadsheets
Question
4 answers
Dear Colleagues,
Another publication related question, does anyone have any experience of becoming part of another organisation like the IEEE, BCS or ACM? I am the programme chair for the European Spreadsheets Risks Interest Group (www.eusprig.org) and we are considering joining up with an appropriate partner - the motivation for doing so is to increase visibility and credibility - is this something anyone has any experience of or any views on?
Many thanks
Simon
Relevant answer
Answer
Thank you very much for this pertinent question regarding your prospect of partnering or collaborating with organisations like the IEEE. My limited knowledge of the IEEE suggests that you could partner with them quite easily, because they are very involved in networking and publishing.
It already has regional networks globally; one of my articles was published by them, thanks to their visibility and active networking with the organisation which hosted my conference paper presentation in Gaborone, Botswana, some years ago.
As you might know, it has its own referencing style, which is quite popular with many authors. For example, in Uganda, where I am, it has chaired the regional network for Africa.
I suggest that you may contact the secretariat with a submitted proposal and follow the procedure for joining them because it is well laid out in their policies.
Best regards
  • asked a question related to Spreadsheets
Question
2 answers
I am conducting a meta-analysis of Hazard Ratios (HR); however, most of the studies presented only Kaplan-Meier curves without the HR and Confidence Intervals (CI). I am extracting data manually from the curves and using the spreadsheet of Tierney et al. to calculate the HR. Nevertheless, I am concerned about the reliability of this method.
I have already tried some programs to extract numerical data from the curves, but none of them presented the number of censored patients at each time point to calculate the number of patients at risk.
Do you know any program/software able to do this? Or any that gives the HR and CI?
Thank you!
Relevant answer
Answer
Hi,
I got the same problem. I found this video very useful: https://www.youtube.com/watch?v=3t6hKgkAc1o&t=1261s
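When only the curves are available, a crude cross-check (not a replacement for the Tierney et al. spreadsheet) uses the proportional-hazards identity S_treat(t) = S_ctrl(t)^HR, read at a single common time point and assuming negligible censoring up to that point. A sketch (function name is mine):

```python
import math

def hr_from_km(s_treat, s_ctrl):
    """Rough hazard ratio from Kaplan-Meier survival fractions at one
    common time point, assuming proportional hazards and little censoring:
    S_treat(t) = S_ctrl(t) ** HR  =>  HR = ln(S_treat) / ln(S_ctrl)."""
    return math.log(s_treat) / math.log(s_ctrl)

print(round(hr_from_km(0.5, 0.25), 6))  # → 0.5
```

If this ballpark figure diverges wildly from what the spreadsheet gives, that is a sign the curve digitisation should be rechecked.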
  • asked a question related to Spreadsheets
Question
2 answers
I am new to the software package and currently going through a learning curve.
The above file is not included in the data files associated with the version I am using. In an earlier software manual, I came across this example: "Example 3: Multiple and non-linear covariates, and producing species occurrence maps", which uses the file Single-season example.xls. Can someone help me find it?
Relevant answer
Answer
Dear Pietro,
Thank you for your detailed answer.
Those two R packages would be quite useful. I knew a bit about unmarked and didn't know about the other. I will try those.
Best,
Dulan.
  • asked a question related to Spreadsheets
Question
9 answers
I am struggling to solve the equation for X:
λ = (A/X) * (1 − exp(−X/A))
λ and A are known values found from experimental analysis, say λ = 0.8 (varies 0.02-0.89) and A = 2 (ranges 1.25-2.5). Is it possible to solve the equation in an Excel spreadsheet? I tried using the Add-ins option, but it gives a wrong value. Thanks for your time and suggestions.
Relevant answer
Answer
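No answer was posted here; for what it's worth, assuming the intended equation is λ = (A/X)·(1 − exp(−X/A)), the right-hand side decreases monotonically from 1 (as X → 0) to 0 (as X → ∞), so for 0 < λ < 1 there is a unique root that simple bisection finds reliably. In Excel, Goal Seek on the same expression does the equivalent job. A Python sketch:

```python
import math

def solve_x(lam, A, lo=1e-9, hi=1e6, tol=1e-12):
    """Solve lam = (A/X) * (1 - exp(-X/A)) for X by bisection.

    The left side as a function of X decreases from 1 to 0,
    so a unique root exists for 0 < lam < 1.
    """
    f = lambda x: (A / x) * (1.0 - math.exp(-x / A)) - lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:   # f is decreasing: root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = solve_x(0.8, 2.0)
print(round(x, 2))  # → 0.93
```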
  • asked a question related to Spreadsheets
Question
8 answers
Hello: I am trying to find a way to make it a bit more efficient to run the lab. I have looked into several lab management programs (Findings 2, labfolder, and some others). It seems most work well for life-science labs that are not heavy on behavior. We need to carefully manage the booking of rooms and apparatuses for behavioral tasks, so we can use them maximally. It would also be handy to have the option to manage the animal colony. At the moment, we have different spreadsheets and a Google calendar. This works OK, but a centralised solution would be better, ideally one that also allows a bit of project management (a list of standard behavioral tasks or components of behavioural protocols, allowing you to build a new experiment, book the rooms for the times you need, put it all in a calendar, and update project progress accordingly), and perhaps even offers some form of communication platform. Perhaps a bit much to ask, but I would be happy with some of these components.
Any suggestions, aside from "dream on"?
Relevant answer
Answer
  • asked a question related to Spreadsheets
Question
3 answers
We all know it is possible to get path coefficients from a correlation matrix (e.g., PATH ANALYSIS I: INTRODUCTION (ecu.edu)). Using a simple solver in a spreadsheet, it is therefore possible to check the results of any article in which a correlation matrix is provided.
The other way around, we can also obtain the correlations from structural equation modeling coefficients when they are not provided in a given article. Can I use this method to obtain correlations before applying Hunter and Schmidt's meta-analysis method? Do you know a software tool that does this effortlessly, because performing it in a spreadsheet is tedious?
Relevant answer
Answer
Hi Adrien,
What you may be referring to, but I'm not sure, is computing partial correlations on the basis of information from regression models. (NB: structural models are systems of equations.)
Ariel Aloe (https://www.researchgate.net/profile/Ariel-Aloe) has done a lot of work on this, including cautioning against the use of partial correlations and other indices derived from regression coefficients.
  • 10.1080/00221309.2013.853021: Partial effect sizes in MA
  • 10.1002/jrsm.1126: inaccuracy of regression results replacing CC
  • 10.3758/s13428-018-1123-7: Concealed correlations MA
If you want a quick way to compute a partial correlation from a t-statistic, check Workbook 6 of Meta-Essentials (disclaimer: I co-developed) - see www.meta-essentials.com
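The t-to-partial-correlation conversion mentioned above is a one-liner; as a sketch, for a regression coefficient's t-statistic with df residual degrees of freedom:

```python
import math

def partial_r_from_t(t, df):
    """Partial correlation implied by a regression t-statistic:
    r = t / sqrt(t^2 + df)."""
    return t / math.sqrt(t * t + df)

print(partial_r_from_t(2.0, 96))  # → 0.2
```

Whether such partial correlations belong in a Hunter-Schmidt meta-analysis at all is exactly the point Aloe's papers above caution about.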
  • asked a question related to Spreadsheets
Question
1 answer
dear friends from the rock mechanics field
Good day. I have what may seem a minor problem, but I cannot crack it somehow.
I have attached a page from the Hoek & Brown (1997) paper, which gives elaborate spreadsheet formulas for the simulation of triaxial data and conversion to Mohr-Coulomb parameters in the absence of actual triaxial tests. All the formulas are explained and mutually connected except one: what is the formula for signt?
It is the normal stress which must be specified in order to calculate the tangent to the Mohr envelope. There is a myriad of formulas involved in the problem and I just don't see the answer. I would greatly appreciate assistance.
Cheers, Hrvoje Vučemilovič
Relevant answer
Answer
The Hoek-Brown criterion helps in determining rock failure conditions, while the Mohr-Coulomb criterion relates to the plasticity and strength of rocks.
  • asked a question related to Spreadsheets
Question
7 answers
Hi everyone. I took a basic course on Markov chains and know a little about Monte Carlo simulations and methods. But I never got to the part about spreadsheets.
If anyone can direct me to a few non-technical, not-too-hard-to-read books on Monte Carlo simulations, I would be grateful.
Relevant answer
Answer
The following are some interesting books:
Introducing Monte Carlo Methods with R, by Christian Robert and George Casella (Springer-Verlag New York, 2010)
The Monte Carlo Simulation Method for System Reliability and Risk Analysis, by Enrico Zio (Springer-Verlag London, 2013)
Handbook of Monte Carlo Methods (Wiley Series in Probability and Statistics), by Dirk P. Kroese, Thomas Taimre and Zdravko I. Botev (Wiley, 2011)
Essentials of Monte Carlo Simulation: Statistical Methods for Building Simulation Models, by Nick T. Thomopoulos (Springer-Verlag New York, 2013)
Simulation and the Monte Carlo Method, by Reuven Y. Rubinstein and Dirk P. Kroese (Wiley, 2017)
Best of luck
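To complement the reading list, a minimal Monte Carlo simulation, the classic estimate of pi from random points in the unit square, fits in a few lines of Python; the same logic can be reproduced in a spreadsheet with RAND():

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: 4 times the fraction of random points
    in the unit square that fall inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples
```

The error shrinks like 1/sqrt(n), which is the defining trade-off of Monte Carlo methods.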
  • asked a question related to Spreadsheets
Question
3 answers
I have estimated the parameters by Maximum Likelihood Estimation (MLE) and the Probability Weighted Moments (PWM) method. I wish to construct the L-moment ratio diagram to graphically demonstrate that the empirical (L-skewness, L-kurtosis) coordinates of my financial asset sample lie close to the GL distribution (say), but the picture is very clumsy in R. I want to customize it and make it neat, and hence I need the freedom to work in a spreadsheet. Besides, an Excel sheet is more intuitive. Could you kindly share it, sir? I shall be grateful to you. I am willing to cite this work in my references and acknowledge it in my thesis, a copy of which I shall send you by next July. Please.
Relevant answer
Answer
Rudolph Ilaboyar, I would like to know if I can also have access to your L-moments algorithm. I am working on a frequency analysis, and this tool would be very useful for me. I promise to use it correctly and cite the corresponding author. Thank you, Camila Gordon; my e-mail is camila_alejandra.nocua_gordon@ete.inrs.ca
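In the meantime, the (L-skewness, L-kurtosis) coordinates needed for the diagram can be computed directly from a sample via probability-weighted moments; a minimal sketch of the standard unbiased estimators:

```python
from math import comb

def sample_lmoments(data):
    """Sample L-moments via probability-weighted moments (PWMs).
    Returns (l1, l2, t3, t4): L-mean, L-scale, L-skewness, L-kurtosis."""
    x = sorted(data)
    n = len(x)

    def b(r):  # unbiased PWM estimator b_r (0-based order statistics)
        return sum(comb(i, r) * x[i] for i in range(r, n)) / (n * comb(n - 1, r))

    b0, b1, b2, b3 = b(0), b(1), b(2), b(3)
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2
```

Plotting the resulting (t3, t4) pairs against the theoretical GL curve then only requires a scatter chart in Excel.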
  • asked a question related to Spreadsheets
Question
4 answers
I have bulk rock major and trace elements data available to me.
Relevant answer
Answer
Dear Ratul Banerjee,
You can use PG.PETROGRAPH (M. Petrelli) or the Geo-fO2 software.
Geo-fO2 is a user-friendly software package for analyzing magmatic oxygen fugacity (fO2). It includes oxybarometers and thermobarometers based on the compositions of common minerals (i.e., amphibole, zircon, and biotite) in intermediate-silicic magmas. Homepage of Geo-fO2: http://www.geo-fo2.com.
  • asked a question related to Spreadsheets
Question
1 answer
Hi,
I am generating long lists of per-cell data that I'd like to visualize as dot plots per condition. The problem is that my conditions are defined by letters rather than numbers. Or sometimes I have multiple different chemicals at different concentrations.
A typical spreadsheet has one (name) or two (name and concentration) columns that define the condition and multiple columns of various measurements. Each row is one cell. There may be hundreds of cells per condition and the number may vary between conditions.
I am having trouble picking the right type of table in Prism to generate the graphs that I want. Does anyone have any advice for me?
Thank you!
Relevant answer
Answer
Use R programming; it is cheaper and easily accessible, and it reads either Excel or CSV format. For GraphPad Prism, your explanation needs clarity on which parameters are to be used in the calculation. In any case, you should be able to do the calculation in Excel, which can be kept as a standard for the future.
  • asked a question related to Spreadsheets
Question
10 answers
I am working on olivine, spinel, clinopyroxene and orthopyroxene. Can someone provide or assist me to find geothermobarometer spreadsheets for above mentioned minerals, single or two-mineral based geothermobarometers.
Thanks in advance
Relevant answer
Answer
Yes, they generally form under low pressure. However, we can still find out the pressure at which they form. I understand that pressure can be calculated from whole-rock data, although this value is not as crisp as what EPMA gives.
  • asked a question related to Spreadsheets
Question
1 answer
We are trying to keep track of the MGH IHP Research publication record. We wanted to find a way to track and download the publications onto an excel spreadsheet without having to do it manually.
Relevant answer
follow
  • asked a question related to Spreadsheets
Question
6 answers
I got the Illumina paired-end 16S sequencing results. I tried to analyze them with QIIME2, but unfortunately I found myself stuck many times. Can anyone help me, and how much would it cost? Please inbox me.
Relevant answer
Answer
Thank you very much for your help; it is highly appreciated.
May God reward you well, and God willing, we will cooperate in the future.
  • asked a question related to Spreadsheets
Question
6 answers
Data at rest in information technology means inactive data that is stored physically in any digital form (for example: databases, data warehouses, spreadsheets, archives, tapes, off-site backups, mobile devices, etc.).
  • asked a question related to Spreadsheets
Question
2 answers
Spreadsheet software provides very little structural guidance as to how to employ it safely and effectively. Spreadsheet developers are free to employ a variety of structural components to interact with their users.
Much advice concentrates on hiding/protecting the components of the spreadsheet that make it work. How can we retain the openness of the spreadsheet presentation?
Relevant answer
Answer
Alan McSweeney Thank you for your response. There is no doubt that the most important risks associated with spreadsheets come from the commercial and administrative sector. Finsbury Solutions and Spreadsheet Detective are just two approaches to review and validation; there are others if you search hard enough. However, all these approaches are both sophisticated and EXPENSIVE for an individual spreadsheet builder.
The vast majority of spreadsheet users would like to spot their mistakes (or just possible mistakes) and decide for themselves what to do about them. My approach (developed in the months since my original post) has been to create a simplified map of a spreadsheet for visual inspection. It is not a universal panacea, but it is free and maintains a historical record of the inspections that you have undertaken.
Should you want more details, please come back to me.
  • asked a question related to Spreadsheets
Question
9 answers
Spreadsheets have been found useful in solving process engineering problems, and their built-in function capabilities make data manipulation much simpler and faster.
Relevant answer
Answer
Dear all of you, today I listened to theories and only theories. Unfortunately I live in a world on the edge of reality. On the one hand, we are scholars by field, and on the other, we are trivial theorists. You, like me, are people who have a superior mind, but your knowledge belongs to spheres of excellence. As a linguist I want to join in and ask you a question. The spreadsheet can handle only numerical facts, just as a writing sheet reproduces the constant headwords of thought. I ask you, great minds: can quantum words be related to numbers and put on an equal footing in a spreadsheet? E.g.: AA BB = 2 times A and 2 times B. By repeating this action 6 times, how can I make it explicit in mathematical language? In linguistics I say that I get an alliterative phenomenon of alliterative concatenation; and you? Sorry, but I need an urgent answer.
  • asked a question related to Spreadsheets
Question
2 answers
Dear QIIME2 users, I need help in preparing my map file. The Google spreadsheet available at the QIIME2 page (https://chmi-sops.github.io/mydoc_qiime2.html) is confusing. Can I merge the forward and reverse index sequences in the same column, or should I feed them in separate columns? I need help if anyone has prepared a map file recently. Please inbox me.
Relevant answer
Answer
You can use my metadata file as a template, but it is in Spanish.
I don't recognize that tutorial; it is not the one from the official QIIME2 webpage (https://docs.qiime2.org/2020.6/tutorials/metadata/).
QIIME2 is going to merge your R1 and R2 files based on the barcode, so you don't need to feed in a barcode for each read, but rather a barcode and index for each sample.
Say your files are Sample1_R1_AATT, Sample1_R2_AATT, Sample2_R1_ATTT, Sample2_R2_ATTT.
They should appear in your metadata as:
Sample1 AATT
Sample2 ATTT
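A per-sample metadata file of that shape can also be written programmatically. The column names below ("sample-id", "barcode-sequence") follow common QIIME2 examples but are assumptions here; check them against the official tutorial for your import format:

```python
import pandas as pd

# Hypothetical two-sample metadata table; one barcode per sample,
# since QIIME2 merges R1/R2 per sample rather than per read.
metadata = pd.DataFrame({
    "sample-id": ["Sample1", "Sample2"],
    "barcode-sequence": ["AATT", "ATTT"],
})
metadata.to_csv("sample-metadata.tsv", sep="\t", index=False)  # QIIME2 expects TSV
```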
  • asked a question related to Spreadsheets
Question
9 answers
I have tried to calculate kinetics in a spreadsheet by the Coats-Redfern method, but I didn't get the correct result for Ea: I got a negative value instead of a positive one. I think my formula isn't correct. Can someone help me?
Relevant answer
Answer
Hi Gunel T. Imanova and Julián E. Sánchez-Velandia, this is an example of my calculation; please have a look and comment. Thank you in advance.
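A frequent cause of a negative Ea is a sign slip when converting the slope: for a first-order Coats-Redfern plot, ln(-ln(1-α)/T²) is linear in 1/T with slope -Ea/R, so Ea = -slope × R. A sketch that checks the sign convention on synthetic data (the Ea and intercept values are invented for the check):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def coats_redfern_ea(T, alpha):
    """First-order Coats-Redfern: fit ln(-ln(1-alpha)/T^2) against 1/T.
    The slope is -Ea/R, so Ea = -slope * R (note the minus sign)."""
    y = np.log(-np.log(1.0 - alpha) / T ** 2)
    slope, _intercept = np.polyfit(1.0 / T, y, 1)
    return -slope * R

# Synthetic check: build alpha(T) from an assumed Ea and recover it
Ea_true = 120e3                      # J/mol, invented for the check
T = np.linspace(500.0, 800.0, 50)    # K
y = 2.0 - Ea_true / (R * T)          # 2.0 stands in for ln(AR/(beta*Ea))
alpha = 1.0 - np.exp(-np.exp(y) * T ** 2)
```

If the same fit on real data still gives a negative Ea, the conversion degree alpha or the temperature units are the next things to check.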
  • asked a question related to Spreadsheets
Question
6 answers
I have been using Aabel (Gigawhiz) plotting and statistical software. However, it is Mac OS centric and cannot be installed on a PC. I like this software because it enables me to directly select a data point in a graph which then highlights the point in the linked spreadsheet. This is very useful when exploring data and anomalous values. However, it is only available for Mac and I am now mainly PC based. As such, can anyone recommend a PC plotting and statistical software package that provides similar functionality? Thanks in advance : -)
Relevant answer
Answer
You can find this option in the Tecplot software; there, you would use the "Probe At" option.
Good luck.
  • asked a question related to Spreadsheets
Question
4 answers
This is to complete a visualization of my Professor, Dr. G's research on his fish data.
Relevant answer
Answer
Personally, I find this the simplest way to import data from Excel into R:
1-) Select and copy the data you would like to import from Excel to R.
2-) Once copied, go to R and enter: data <- read.table("clipboard", header=TRUE)
3-) Finally, you can view your data in R by typing: View(data)
Enjoy.
  • asked a question related to Spreadsheets
Question
3 answers
What analytic software would you recommend to examine a data set I created that lists the faculty members making up 1400+ thesis committees over a ten year period?
I'm interested in seeing how often the same faculty members have served on committees over this time period. I'm also interested in coding faculty members by discipline and am hoping this layer of meaning can also be part of my analysis.
Any recommendations for how to best analyze a spreadsheet of thesis committees with these goals in mind would be appreciated!
Relevant answer
Answer
I would recommend you use Python through a Jupyter Notebook (IDE). There are nice libraries that can easily perform the task.
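As a sketch of what that looks like with pandas (the column names and data here are hypothetical, one row per thesis-member pair):

```python
import pandas as pd

committees = pd.DataFrame({
    "thesis_id":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "faculty":    ["Ada", "Ben", "Cal", "Ada", "Ben", "Dee", "Ada", "Cal", "Dee"],
    "discipline": ["CS", "Bio", "CS", "CS", "Bio", "Math", "CS", "CS", "Math"],
})

# How often each faculty member has served over the period
service_counts = committees["faculty"].value_counts()

# The same service totals aggregated by the discipline coding
by_discipline = committees.groupby("discipline")["faculty"].count()
```

For co-membership patterns (who serves together, and how often), the same table can feed a network library such as networkx.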
  • asked a question related to Spreadsheets
Question
4 answers
In response to several inquiries over the past few years, I have undertaken the task of updating the thermodynamic properties of steam. It has been 25 years since the IAPWS first released SF95, so this effort is long overdue. My goal is to extend the range of applicability up to 6000 K and 150 GPa, considerably above any existing formulation. The data on which to base this extension has been available since 1974. While I have not completed the task, I have made considerable progress. As some researchers have expressed an urgent need for this work, I am releasing preliminary results. All of the data plus an Excel AddIn are contained in the following archive http://dudleybenton.altervista.org/miscellaneous/AllSteam104.zip, which will be updated as the work progresses. There is an installer to facilitate selection and placement of the correct (32-bit or 64-bit) AddIn, although this is unnecessary, as the libraries (AllSteam32.xll and AllSteam64.xll) are included. The current implementation is piecemeal and not optimal, but it does function. When complete, I will also release the source code. The average error in calculated density (compared to experimental) for the 2930 data points from 18 studies (see spreadsheet H2O_PVT_data.xls) for the SF95 and 2020 formulations is 0.68% and 0.46%, respectively. The maximum error is more telling, at 267% and 40%, respectively, as shown in the attached figure. I welcome your discussion.
Relevant answer
Answer
I personally don't have a need for steam at such high pressures, but several people have asked for this in support of their research. One guy was working with deep well fracking, another was working with shock waves, and a third was doing some sort of astronomical calculations related to Jupiter. Of course, we don't know for sure if there is any water on Jupiter. A very high temperature application is when they squirt water into the base of the launch pad beneath the Saturn V booster to keep the cement from disintegrating and the steel from melting. I know a guy who is working for a contractor that's supposed to provide some design calculations for that. By the way... My first boss when I got out of school (Dr. William R. Waldrop) is the guy who wrote the 3D transient computer model of the Saturn V booster, which was a deliverable in Lockheed's contract with NASA. It was a FORTRAN program written on punch cards. Who would have thought they'd still be using the same design?
  • asked a question related to Spreadsheets
Question
3 answers
Does anyone know of any freely available software or code that can optimise protocol sequences for 3D unconventional electrode setups, including borehole electrodes?
The closest research I found is the BGS optimised algorithm series (led by Paul Wilkinson), but I don't think they are freely available - at least I couldn't find them anywhere.
I am also familiar with Electre Pro, but it is neither free nor an optimisation framework.
I also know the SEER spreadsheet, but it is 2D and very rigid in several aspects (electrode spacing, arrays and total number of electrodes). It also does not allow exporting the assessed protocol.
Thanks.
Relevant answer
Answer
Hi Bruna,
I had the same question a few years ago.
I emailed Paul Wilkinson at the BGS (as suggested by Thomas Hermans).
He directed me towards the following papers.
I applied the methodology presented in the papers with a Matlab Code available on Github (the comments are in French). You can find the published work here :
Cheers
Adrien
Here are the articles that were useful for me to do the optimization :
- P. Stummer et al., "Optimization of DC resistivity data acquisition: real-time experimental design and a new multielectrode system," IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 12, pp. 2727–2735, 2002
- P. Stummer, H. Maurer and A. G. Green, "Experimental design: Electrical resistivity data sets that provide optimum subsurface information," Geophysics, vol. 69, no. 1, pp. 120–139, 2004
- A. Furman, T. Ferré and A. W. Warrick, "Optimization of ERT surveys for monitoring transient hydrological events using perturbation sensitivity and genetic algorithms," Vadose Zone Journal, vol. 3, no. 4, pp. 1230–1239, 2004
- P. B. Wilkinson et al., "Improved strategies for the automatic selection of optimized sets of electrical resistivity tomography measurement configurations," Geophysical Journal International, vol. 167, no. 3, pp. 1119–1126, 2006
- P. B. Wilkinson et al., "Practical aspects of applied optimized survey design for electrical resistivity tomography," Geophysical Journal International, vol. 189, no. 1, pp. 428–440, 2012
- M. Loke et al., "Optimized arrays for 2D cross-borehole electrical tomography surveys," Geophysical Prospecting, vol. 62, no. 1, pp. 172–189, 2014
- M. Loke et al., "Computation of optimized arrays for 3-D electrical imaging surveys," Geophysical Journal International, vol. 199, no. 3, pp. 1751–1764, 2014
- P. B. Wilkinson et al., "Adaptive time-lapse optimized survey design for electrical resistivity tomography monitoring," Geophysical Journal International, vol. 203, no. 1, pp. 755–766, 2015
- M. Loke et al., "Optimized arrays for 2D resistivity surveys with combined surface and buried arrays," Near Surface Geophysics, vol. 13, no. 5, pp. 505–517, 2015
- M. Loke et al., "Optimized arrays for 2-D resistivity survey lines with a large number of electrodes," Journal of Applied Geophysics, vol. 112, pp. 136–146, 2015
- F. M. Abdullah et al., "Assessing the reliability and performance of optimized and conventional resistivity arrays for shallow subsurface investigations," Journal of Applied Geophysics, vol. 155, pp. 237–245, 2018
- F. M. Wagner et al., "Constructive optimization of electrode locations for target-focused resistivity monitoring," Geophysics, vol. 80, no. 2, pp. E29–E40, 2015
- S. Uhlemann et al., "Optimized survey design for electrical resistivity tomography: combined optimization of measurement configuration and electrode placement," Geophysical Journal International, vol. 214, no. 1, pp. 108–121, 2018
- D. Smyl and D. Liu, "Optimizing electrode positions for 2D electrical impedance tomography sensors using deep learning," arXiv preprint arXiv:1910.10077, 2019
  • asked a question related to Spreadsheets
Question
3 answers
I have input data into my Excel spreadsheet. The data is descriptive. I was wondering if there is a way of quantifying this data, such as finding common words, etc.
Any suggestions are appreciated :)
Relevant answer
Answer
To identify whether a word appears in an MS Excel cell of text, you could write a formula. For example, if you're looking for "the" in cell A2, write =FIND("the",A2), and it returns the character position at which the word begins. Write =IF(ISERROR(FIND("the",A2)),0,1) to return 1 if the cell contains the text and 0 if it does not. If you would like to check for the occurrence of at least one of a set of terms (e.g., his, her, or their), then write =IF(AND(ISERROR(FIND("his",A2)),ISERROR(FIND("her",A2)),ISERROR(FIND("their",A2))),0,1). Przemysław's suggestion would definitely work to count the number of occurrences of particular words, perhaps if you used MS Word, replaced spaces with paragraph marks, and then pasted into Excel. I'm very curious too, Tarik, so I hope someone adds more elaborate, authoritative answers than ours. Best wishes with your project. ~ Kevin
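Outside Excel, the same word-frequency question takes only a few lines of Python; a minimal sketch over a column of free-text cells:

```python
from collections import Counter
import re

def common_words(cells, top=3):
    """Count word frequencies across a list of free-text cells
    (e.g., one spreadsheet column) and return the `top` most common."""
    words = []
    for cell in cells:
        words.extend(re.findall(r"[a-z']+", cell.lower()))
    return Counter(words).most_common(top)
```

The column can be exported from Excel as CSV and read with the csv module or pandas before counting.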
  • asked a question related to Spreadsheets
Question
2 answers
Please, how can we apply the GLUE or SUFI-2 method to the WEAP model to estimate uncertainty? GLUE and SUFI-2 are included in SWAT-CUP; can I find these methods in other software or an Excel spreadsheet so that I can apply them?
Thanks in advance
Relevant answer
Answer
Hi Changle,
Thank you so much, I am going to check that.
Regards
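For readers with the same question: the core GLUE recipe is small enough to implement outside SWAT-CUP. A toy sketch, with a stand-in linear model in place of WEAP, a Nash-Sutcliffe likelihood, and an arbitrary 0.5 behavioral threshold (all of these choices are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "model": y = a * x, with a the uncertain parameter
x = np.linspace(1.0, 10.0, 20)
obs = 2.0 * x + rng.normal(0.0, 0.5, x.size)

# 1) Monte Carlo sample the parameter space
a_samples = rng.uniform(0.5, 4.0, 2000)
sims = np.outer(a_samples, x)                  # one simulated series per sample

# 2) Score each run with a likelihood (Nash-Sutcliffe efficiency here)
sse = ((sims - obs) ** 2).sum(axis=1)
nse = 1.0 - sse / ((obs - obs.mean()) ** 2).sum()

# 3) Keep "behavioral" runs and weight them by likelihood
behavioral = nse > 0.5
weights = nse[behavioral] / nse[behavioral].sum()

# 4) Likelihood-weighted 5-95% uncertainty bounds at each point
def weighted_quantile(values, q, w):
    order = np.argsort(values)
    cw = np.cumsum(w[order])
    return np.interp(q, cw / cw[-1], values[order])

lower = np.array([weighted_quantile(sims[behavioral, i], 0.05, weights)
                  for i in range(x.size)])
upper = np.array([weighted_quantile(sims[behavioral, i], 0.95, weights)
                  for i in range(x.size)])
```

With WEAP, steps 1-2 would loop over model runs driven by sampled parameter sets; the bookkeeping itself could equally live in a spreadsheet.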
  • asked a question related to Spreadsheets
Question
4 answers
Evapotranspiration estimation
Relevant answer
Answer
Thanks for the reply, sir.
  • asked a question related to Spreadsheets
Question
8 answers
need a good and emerging answer... Thanks
Relevant answer
Answer
Optimization of distillation columns is a mixed-integer nonlinear programming problem. The in-house optimization methods of both Aspen Plus and Hysys can't handle integer variables like the number of trays, the feed stage, and others (e.g., side outlet streams). Therefore, I recommend the use of external algorithms, either of a deterministic nature such as BARON (highly recommended) or of a stochastic nature like genetic algorithms (NSGA-II, highly recommended), to handle the integer variables.
The optimization problem statement is a key step to ensure good performance of the optimization algorithms. I suggest the cost of the distillation column (including the cost of equipment around the column, such as compressors, pumps, and heat exchangers) as the objective function to minimize, because all variables have an economic influence.
There are different tools to ease optimization tasking by external algorithms using Aspen plus or Hysys simulations. For example, in the following link, you can find Matlab code to connect Aspen Plus and Matlab through COM technology and use widely used Matlab optimization algorithms:
It's possible to connect Aspen Plus or Hysys to any programming language with COM capabilities.
I could send you an example of a Hysys-Matlab connection.
Best Regards,
  • asked a question related to Spreadsheets
Question
5 answers
I am struggling with measuring inequality in the health workforce between regions, based on the number of workers in each region and the region's population, using the Atkinson index (the spreadsheet is attached). Is there anyone willing to help me find where I made a mistake?
Thanks,
AM
Relevant answer
Answer
I am really interested in the work.
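For reference, the Atkinson index itself is short to compute and easy to cross-check against a spreadsheet; a sketch using per-region values (e.g., health workers per capita):

```python
import numpy as np

def atkinson(x, epsilon=0.5):
    """Atkinson inequality index for a positive distribution x.
    0 means perfect equality; higher means more inequality."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    if epsilon == 1.0:
        ede = np.exp(np.log(x).mean())                     # geometric mean
    else:
        ede = np.mean(x ** (1.0 - epsilon)) ** (1.0 / (1.0 - epsilon))
    return 1.0 - ede / mean  # 1 - (equally distributed equivalent / mean)
```

If every region has the same value, the index is exactly zero, which is a quick sanity check for the spreadsheet.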
  • asked a question related to Spreadsheets
Question
4 answers
I have carried out an experiment on cells from the same cell line, grown in two plates and exposed to different conditions (Control vs Treated).
When I collected the cells after n days of treatment, I carried out PCR targeting a specific gene, normalised to a housekeeping gene, and analysed the data using the DDCT method.
I now have the DCT values in a spreadsheet waiting for statistical analysis; however, I am not sure which test to use, or whether I should use DCT or DDCT values.
My DCT results seem to tick the boxes for a non-parametric test, but I am not sure if it should be paired or unpaired. My logic is that they could be paired, since they came from the same cell line, or that they could be unpaired, since I am comparing Day n (control) vs Day n (treated).
Results:
Control= 12.59 11.19 12.31 10.89 11.08 11.32 10.97 8.48 10.25
Treated= 11.28 10.98 10.35 11.39 10.36 10.83 10.77 7.91 9.8
DDCT values:
-1.31 -0.21 -1.97 0.49 -0.73 -0.49 -0.2 -0.57 -0.45
Please let me know if you would like more detail. I'm still something of a statistics beginner, so I'm not sure how much depth is required to be able to answer this question.
Relevant answer
Answer
Whether you treat the data as paired or not should follow from your experimental design. Is there any source of variance that acts on pairs of treated and control samples? For instance: one cell culture was grown, then split into two vials, one of which was treated. This gave the first pair of values. Then another culture was grown independently, split, treated, and measured. That would be the second pair, etc.
Calculating individual ddCt values makes sense ONLY when you do have a pairing (and when it is given which two values form a pair). Then you would test whether the mean of these ddCt values is zero, using a one-sample t-test, which is the very same as using a paired-sample t-test on the two groups of dCt values.
If the data are not paired, then you would test the mean difference of the two samples of dCt values using a two-sample (independent-samples) t-test.
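The equivalence described above can be verified directly with the dCt values from the question (assuming, for illustration only, that the values are paired in the order listed):

```python
import numpy as np
from scipy import stats

control = np.array([12.59, 11.19, 12.31, 10.89, 11.08, 11.32, 10.97, 8.48, 10.25])
treated = np.array([11.28, 10.98, 10.35, 11.39, 10.36, 10.83, 10.77, 7.91, 9.80])

# Paired t-test on the two groups of dCt values...
t_paired, p_paired = stats.ttest_rel(treated, control)

# ...is identical to a one-sample t-test of the ddCt values against zero
t_one, p_one = stats.ttest_1samp(treated - control, 0.0)
```

Both calls return the same statistic and p-value, which is the point Jochen makes about paired versus one-sample testing.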
  • asked a question related to Spreadsheets
Question
9 answers
I downloaded climate data from CCMA for the CanESM2 model. I wanted to use these data as inputs for water quality models. Unfortunately, the data are in .nc format and are not usable in a spreadsheet such as Excel. Could anyone help me? Thanks in advance for your help and advice!
Relevant answer
Answer
You can use R to read NetCDF files and save them to Excel; take care that you have an Excel version which can handle a lot of data.
  • asked a question related to Spreadsheets
Question
4 answers
Articles I have written mostly use data from other people. One possible exception was my calculation of lexical growth rates (lexical scaling, on arXiv and RG), which, outside of glottochronology, no one else seems to have much bothered with. To calculate lexical growth rates I used historical dictionaries of the English language. Collecting, adjudging, and organizing words in the English lexicon is a data project much vaster in scope than merely taking word counts and calculating rates. It seems that in this era there are all kinds of sources of data which a person can use as a basis for theory. I performed no experiments (unless spreadsheet calculations and forming equations as theoretical experiments count). I instead found an abundance of data, and the data were amenable to new theoretical investigation. So I wonder: if data in this computer age is increasing by vast amounts, can theory keep up? Will AI remedy that possible deficiency?
Relevant answer
Answer
Generally speaking, theories guide the development of research, and the data collected in research are grounded in specific theories. Therefore, data are the by-product of theories, and there seems to be no relationship between the magnitude of data and the theories inspired by them.
  • asked a question related to Spreadsheets
Question
3 answers
I'm looking for a dataset containing the seismic activity at Mt Etna between 2000-2010, and another dataset (or the same one) containing the same data for Kīlauea. Preferably they could be downloaded as a txt file or onto an Excel spreadsheet.
Relevant answer
Answer
Hi Miriam,
for Kilauea data (and global seismic data potentially containing data for both Etna and Kilauea) you could also check on the USGS websites:
Good luck
Simon
  • asked a question related to Spreadsheets
Question
1 answer
This arises from issues I encountered in 2007. From September 2005 to the end of May 2007 I was looking for the base of a logarithm that would connect the rate of English lexical growth, about 3.39% per decade to the rate of divergence measured by Morris Swadesh for related Indo-European languages, which he found to be less than 14% per thousand years. I was looking for a number, but after many spreadsheets and failed guesses found that a network’s mean path length (mu, say) worked as the base of the logarithm. (I wrote a paper in early 2008 about this, on arxiv.)
I noticed this peculiarity about C·log(n), where n is the number of nodes in the network and C is the clustering coefficient. From one perspective, if there are n nodes, then log(n) = k is the number of degrees of freedom in the network (or ensemble) relative to the mean path length mu. But at the same time, log(n) = k could also represent k time periods occurring for a set of nodes equal in number to the mean path length mu. So which is it? Is k the degrees of freedom for an ensemble? Or does k represent the length of time for a single cluster of mu nodes? Is this an irrelevant mathematical curiosity? Or is there some connection between k as degrees of freedom for n nodes relative to mu and the way that time works? Why is the number of degrees of freedom for n nodes equivalent to the degrees of freedom relative to a single cluster of mu nodes over k periods of time? For example, does this imply that an event that occurs for an entire ensemble can in some way be emulated by an event repeated over k time periods for mu nodes? Perhaps it is nothing. But it seems that J. Willard Gibbs, in his Elementary Principles of Statistical Mechanics, used a kind of variation of this when he calculated the statistical distribution of multiple copies of the same ensemble. Do these mathematical aspects indicate something about the nature of time, that a period of time for mu nodes can correspond to an event at a point of time for an entire ensemble? Does that seem to echo Minkowski's notion of space-time?
Relevant answer
Answer
1. Logic of the Second Law of Thermodynamics: Subjectivism, Logical Jump, Interdisciplinary Argumentation.
2. New thermodynamics pursues universality, two theoretical cornerstones:
2.1 Boltzmann formula: ro=A*exp(-Mgh/RT) - Isotope centrifugal separation experiments show that it is suitable for gases and liquids.
2.2. Hydrostatic equilibrium: applicable to gases and liquids.
3. The second and third sonic virial coefficients of R143a derived from the new thermodynamics are in agreement with the experimental results.
3.1. The third velocity Virial coefficient derived is in agreement with the experimental data, which shows that the theory is still correct when the critical density is reached.
4. See Appendix Pictures and Documents for details.
  • asked a question related to Spreadsheets
Question
1 answer
I am currently trying to use the spreadsheet provided in the paper to calculate the melt pool size for other materials. My problem is that I am not able to find the right relation connecting B and p with u and P. Could someone please help me with this?
Thanks
Shivam
Relevant answer
Answer
Following
  • asked a question related to Spreadsheets
Question
7 answers
Hello all,
I am looking to calculate Cohen's d for effect size, ideally using a reliable Excel spreadsheet.
I understand the equation d = (M1 - M2) / SDpooled, but using a spreadsheet would be much quicker.
Also, on the same note, my aim was to calculate d from the equation above using the mean and SD differences between pre- and post-test measures. So if Group A scored 23.0 in test 1 and 24.5 in test 2, the mean difference I would use would be 1.5. If Group B scored 25.4 in test 1 and 26.0 in test 2, the equation would be
(1.5 - 0.6) / SDpooled... I hope that makes sense.
Or should I use independent-samples t-tests on the mean differences between pre- and post-test measures? Does either seem like a sensible method of calculating d?
Your help and advice would be greatly appreciated.
Regards,
Lee
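The pooled-SD formula can be sketched in a few lines of Python, which is easy to transplant into a spreadsheet; the SDs of the change scores and the group sizes below are invented for illustration:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d with pooled SD: d = (M1 - M2) / SD_pooled."""
    sd_pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

# Pre/post mean changes from the question (1.5 vs 0.6), with assumed
# change-score SDs of 1.0 and n = 10 per group:
d = cohens_d(1.5, 1.0, 10, 0.6, 1.0, 10)
```

When comparing the two groups' change scores this way, the SDs fed in should be the SDs of the change scores themselves, not of the raw pre or post values.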
  • asked a question related to Spreadsheets
Question
3 answers
I am starting up a laboratory and want to have lab inventory management in place. I have been used to keeping track with spreadsheets on Google Sheets, but there must be something better out there, so that you can keep track of the amounts of reagents left and have better quality control over stock.
What's the best one out there?
Relevant answer
Answer
Hi Daniel,
Have you looked into using a Kanban system of colour coding? It's a very simple and easy tool set to utilise within the lab environment. Along with the other suggestions, it could be a great addition.
Message me if you would like more information, all the best with your research.
Regards
Martin
  • asked a question related to Spreadsheets
Question
3 answers
Hi There,
I administered an online questionnaire and all my responses have been properly coded.
However, I am facing a couple of issues.
1. Non-response: I understand that missing fields are common in questionnaires. Is it necessary (some textbooks recommend it) to replace the '-' (dash) with '999' when cleaning up the data? I am afraid that the '999' will affect my analysis.
2. Incomplete questionnaires: Should incomplete questionnaires be included in the analysis, or are they considered void? If considered void, is it recommended that I delete the cases, or leave the spreadsheet as it was when I exported it to SPSS?
GREATLY appreciate any advice. Thank you.
Relevant answer
Answer
Ignore those values
  • asked a question related to Spreadsheets
Question
8 answers
Hello,
I am working on the stratigraphy and lithogeochemistry of a VMS-hosting sequence of Paleoproterozoic volcanic rocks and would like to plot drillcore samples that I collected in IoGas and Geoscience Analyst. I only have the collar location, drillhole orientation and sample depth; however, both software packages require XYZ coordinates to display the samples accurately. I know these calculations can easily be done with Gocad, Target or other 3D mining software, but I am wondering if there is another (cheaper) option where I could process small batches of data on an as-needed basis. It does not have to be something that takes into account all the drillhole deviation; I am not looking for that kind of precision. Thank you.
Simon
Relevant answer
Answer
Simple trigonometry and direction cosines.
The z-coordinate of your sample = z(collar) - depth(sample)*sin(DHdip), where
depth(sample) = halfway point between top and bottom of sampling interval and
DHdip = drillhole inclination angle from horizontal.
Sample x (Easting) and y (Northing) are given by:
x = x(collar) + depth(sample)*cos(DHdip)*sin(DHazi)
y = y(collar) + depth(sample)*cos(DHdip)*cos(DHazi)
As you know, I guess, you have to express the angles in radians for the trigonometric functions in Excel, so divide angles in degrees by 180 and multiply by pi.
You can correct for DH deviation using one of the common correction formulas. I have made an Excel spreadsheet that does that using the minimum curvature algorithm. I can send you a copy when I get back to my office computer on Monday. Haven’t checked the link William sent you above, but I guess you should find your answer there too.
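The formulas above translate directly into a few lines of code. A sketch in Python (angles in degrees, straight-hole assumption as stated, no deviation correction):

```python
import math

def sample_xyz(collar, dip_deg, azi_deg, depth):
    """Convert a downhole sample depth to X (Easting), Y (Northing), Z.

    collar  -- (x, y, z) of the drillhole collar
    dip_deg -- inclination below horizontal, positive down (e.g. 60)
    azi_deg -- azimuth in degrees clockwise from north
    depth   -- downhole depth to the sample interval midpoint
    Assumes a straight hole (no deviation survey).
    """
    dip = math.radians(dip_deg)   # Excel equivalent: degrees/180*PI()
    azi = math.radians(azi_deg)
    x0, y0, z0 = collar
    horiz = depth * math.cos(dip)          # horizontal projection of depth
    x = x0 + horiz * math.sin(azi)         # Easting
    y = y0 + horiz * math.cos(azi)         # Northing
    z = z0 - depth * math.sin(dip)         # elevation decreases downhole
    return x, y, z

# A vertical hole (dip 90): all displacement goes into Z.
print(sample_xyz((1000.0, 2000.0, 350.0), 90.0, 0.0, 120.0))
# A horizontal hole pointing due east (azimuth 90): all displacement in X.
print(sample_xyz((1000.0, 2000.0, 350.0), 0.0, 90.0, 120.0))
```

The same function applied row by row over a sample table reproduces what the Excel formulas above do cell by cell.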
  • asked a question related to Spreadsheets
Question
3 answers
Dear everyone,
I am trying to calculate the pore volume using the alpha-s method and have plotted the data from the sorption analysis against a reference with similar surface chemistry. I understand that I can get the micropore properties at low pressures. However, the micropore volume I read off the plot is about three times higher than the values I usually see in references. Is there a formula for calculating the micropore volume from the values in the plot? Please find attached my spreadsheet and a picture of my plot.
Kind regards,
David
Relevant answer
Answer
Also, you can use t-plot method
  • asked a question related to Spreadsheets
Question
7 answers
I need these files for my research.
Relevant answer
Answer
Please find attached the spreadsheets for tourmaline and chromite. I am still looking for the other two minerals, corundum and zircon; if I find them, I will send them to you immediately.
Regards.
Ahmed
  • asked a question related to Spreadsheets
Question
5 answers
Hello,
I am testing whether some firm-specific variables are correlated with firms' leverage ratios using panel data. I want to include time-invariant dummy variables in my regression model. Is that correct? How do I do it? Is it by adding a new column for each dummy variable in my spreadsheet, e.g. for industry classification?
Thanks
Relevant answer
Answer
The purpose of the dummies is to test their significance and the sign of their effect on the dependent variable.
Unfortunately, my econometric knowledge is limited; despite being essential for accounting and finance postgraduates, it was not included in the taught part of my programme of study.
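For what it's worth, building the dummy columns is mechanical: one 0/1 column per industry class, with one class left out as the baseline to avoid perfect collinearity with the intercept (the dummy-variable trap). A small pure-Python sketch (the industry labels are made up; pandas users would reach for `pd.get_dummies` instead):

```python
def make_dummies(values, drop_first=True):
    """Turn a categorical column into 0/1 dummy columns.

    Drops the first (alphabetical) category as the baseline to avoid
    the dummy-variable trap (perfect collinearity with the intercept).
    Returns (category_names, rows) where each row is a list of 0/1.
    """
    cats = sorted(set(values))
    keep = cats[1:] if drop_first else cats
    rows = [[1 if v == c else 0 for c in keep] for v in values]
    return keep, rows

industry = ["Banking", "Mining", "Banking", "Retail", "Mining"]
names, dummies = make_dummies(industry)
print(names)    # dummy column names; "Banking" is the omitted baseline
for row in dummies:
    print(row)
```

Each dummy row is then pasted alongside the firm-year observations, repeated for every year of the panel since the classification is time-invariant.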
  • asked a question related to Spreadsheets
Question
1 answer
Dear Monica,
Hi,
What are the specific objectives of this project? I have some experience with financial management theory and spreadsheet development, perhaps I can be part of the action.
Regards,
Ernest.
Relevant answer
Answer
Here are 6 types of objectives. Does your project have objectives that cover all these categories?
1. Financial
Financial objectives are normally relatively easy to put together, and you will find your sponsor is keen to make sure that if your project is going to make the company any money, this is recorded adequately in the project objectives. Your project may deliver a clear financial return (for example, launching a new product to the consumer market) or make a financial saving (such as closing an underperforming office).
2. Quality
There may be some quality objectives for your project, such as delivering to certain internal or external quality standards. Quality objectives also manifest themselves in the form of process improvement projects that aim to reduce defects or increase customer satisfaction somehow. You may find that quality objectives are included in your quality plan, so you can take them from there and include them in the main body of your project documentation (or vice versa, as you will probably write the quality plan after your charter).
3. Technical
Companies already have technology in use so a technical objective could be to upgrade existing technology, install new technology or even to make use of existing technology during the deployment of your project. Technology comes in different forms so this could include mobile devices or telephones as well as hardware, software and networking capabilities.
4. Performance
Performance objectives tend to be related to how the project will be run, so could include things like delivering to a certain budget figure or by a certain date, or not exceeding a certain number of resources. You could also have performance objectives related to achieving project scope, such as the number of requirements that will be completed or achieving customer sign off.
5. Compliance
Regulatory requirements form compliance objectives. For example, there could be the obligation to meet legal guidance on your project or to comply with local regulations. A construction project could also have the objective to meet or exceed health and safety targets.
6. Business
Of course! This is the main area where you are likely to find project objectives and it relates to what it is that you are doing – the key drivers for the project. Business objectives would be things like launching that new product, closing that office or anything else that is the main reason for delivering the project.
All these objectives have to be prioritised. Most people would say that business objectives are the most important, as without these you don’t really have a project. But within the business objectives you could have multiple sub-objectives and these also need to be prioritised.
  • asked a question related to Spreadsheets
Question
11 answers
I'm going to perform seismic analysis in LUSAS, an FEM software package. I downloaded ground motion data (accelerograms) from the CESM database in the form of a text file. I have no clue how to move forward; I just know that I need two columns in a spreadsheet, one representing time and the other acceleration.
Any help would be appreciated!
Relevant answer
Answer
Here you go, you should be able to work it out from this example file I made.
If you're smart you'll change the equation for the list under "column" so that you don't even need to split the time and acceleration columns.
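Strong-motion text files vary by agency, but the conversion usually reduces to skipping the header lines and pairing each acceleration sample with a generated time column. A sketch (it assumes the file holds whitespace-separated acceleration samples at a fixed time step `dt`; check the CESM file header for the actual dt and units):

```python
import csv

def parse_accelerations(text):
    """Collect acceleration samples from a strong-motion text file.

    Lines containing any non-numeric token (headers, comments) are
    skipped; everything else is read as whitespace-separated samples.
    """
    samples = []
    for line in text.splitlines():
        toks = line.split()
        if not toks:
            continue
        try:
            vals = [float(t) for t in toks]
        except ValueError:
            continue        # header line, skip it
        samples.extend(vals)
    return samples

def write_time_accel_csv(samples, dt, out_path):
    """Write two columns (time, acceleration) for a fixed step dt."""
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["time_s", "accel"])
        for i, a in enumerate(samples):
            w.writerow([round(i * dt, 6), a])

demo = "EVENT: demo record\nNPTS 6, DT 0.01\n0.001 -0.002 0.004\n0.003 -0.001 0.000\n"
print(parse_accelerations(demo))
```

The resulting CSV opens directly in a spreadsheet with the two columns LUSAS expects.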
  • asked a question related to Spreadsheets
Question
3 answers
Hello,
I recently had a batch of addresses geocoded by the Texas A&M geocoding service, and I am quite confused as to how to get all those dots to appear on my map. When I add the result to the ArcMap table of contents, it just appears as a spreadsheet. Do I need to join it to an already existing layer, such as a TIGER file? When I right-click the geocoded spreadsheet in the table of contents, it asks me if I want to geocode the addresses (no), but it also offers 'Display XY Data'. Is that it? When I tried that, it complained that there are no Object IDs, and after clicking OK only a single coordinate dot appeared on my map. I have over 5,000!
Thank you
Relevant answer
Answer
I haven't done geocoding since college, but here's a good place to start if you're working with addresses rather than coordinates:
  • asked a question related to Spreadsheets
Question
27 answers
I would like to calculate the FWHM from XRD data (2-theta vs. intensity) I was given in Excel spreadsheet format. I also happen to have X'Pert HighScore, but unfortunately I have no idea how to import the XRD data into the software. Can anyone please help me with this problem?
Relevant answer
Answer
I do a similar thing.
I make sure it is just two columns of data without headers in excel, save as *.prn, manually rename the file *.asc, and then highscore is able to open it.
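Once the data is in two plain columns, the FWHM of an isolated peak can also be estimated directly from the numbers, without HighScore, by linearly interpolating where the intensity crosses half of the maximum. A sketch (assumes a single, background-subtracted peak in the window):

```python
def fwhm(two_theta, intensity):
    """Estimate the FWHM of a single peak by linear interpolation.

    two_theta, intensity -- equal-length lists covering one isolated,
    background-subtracted peak. Returns the width in the same units
    as two_theta.
    """
    peak = max(range(len(intensity)), key=intensity.__getitem__)
    half = intensity[peak] / 2.0

    def cross(i, j):
        # x-position where the intensity crosses `half` between i and j
        x0, x1 = two_theta[i], two_theta[j]
        y0, y1 = intensity[i], intensity[j]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    # walk left from the peak to the rising half-max crossing
    i = peak
    while i > 0 and intensity[i - 1] > half:
        i -= 1
    left = cross(i - 1, i)
    # walk right from the peak to the falling crossing
    j = peak
    while j < len(intensity) - 1 and intensity[j + 1] > half:
        j += 1
    right = cross(j, j + 1)
    return right - left

x = [30.0, 30.1, 30.2, 30.3, 30.4, 30.5, 30.6, 30.7, 30.8]
y = [0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 0.0]
print(fwhm(x, y))   # triangular peak: half max 2.0 crossed at 30.2 and 30.6
```

For overlapping peaks or noisy data, a proper profile fit (as HighScore does) is still the better route; this is only a quick spreadsheet-style estimate.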
  • asked a question related to Spreadsheets
Question
3 answers
Hello,
So I am having some spreadsheet issues, using excel. So I'll get right to it.
I am keeping population count data of every county in the USA.
However, I do not have population data for Puerto Rico.
I would like to take my spreadsheet of every USA county, and delete all counties/municipalities that belong to Puerto Rico, or any other U.S territories outside of the 50 states.
How can I do this?
Is there some way to take my spreadsheet that has info for only the 50 states' counties, select them, then go to the spreadsheet that has counties for all 50 states plus the territories, and tell Excel: delete all rows that don't match the entries of my other spreadsheet?
Thanks!
Relevant answer
Answer
A filter on the page with non-US data would be easiest. Unselect those variables.
If you need to integrate, use vlookup.
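The same lookup logic can also be scripted with the standard `csv` module: keep only the rows whose state code appears in the 50-state sheet. The column names (`County`, `State`) below are assumptions; adjust them to your actual headers:

```python
import csv, io

def keep_matching_rows(full_csv, states_csv, key="State"):
    """Return rows of full_csv whose `key` value appears in states_csv.

    Both arguments are file-like objects with a header row. This is the
    scripted equivalent of filtering / VLOOKUP against the 50-state sheet.
    """
    wanted = {row[key] for row in csv.DictReader(states_csv)}
    return [row for row in csv.DictReader(full_csv) if row[key] in wanted]

full = io.StringIO("County,State,Pop\nAutauga,AL,55869\nSan Juan,PR,37000\nKent,RI,164292\n")
fifty = io.StringIO("State\nAL\nRI\n")
for row in keep_matching_rows(full, fifty):
    print(row["County"], row["State"])
```

In practice you would pass `open("all_counties.csv")` and `open("fifty_states.csv")` instead of the in-memory strings, then write the surviving rows back out with `csv.DictWriter`.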
  • asked a question related to Spreadsheets
Question
5 answers
Hello,
So I am trying to do some excel gymnastics here and am having a lot of trouble sorting my data and figuring out how to even explain what it is I want to do. Basically, I have an original excel spreadsheet and a new excel spreadsheet, which is just an update of the original, containing more data. So the new spreadsheet has four more rows than the original. How do I take the new column and have it replace the original column, while maintaining the order?
I'll try and help visualize the issue here with a made-up example regarding number of universities in each state:
Original spreadsheet (2016):
FID State Universities
1 AL 124
2 KY 155
3 CA 166
4 NY 176
5 UT 98
New spreadsheet (2017):
State Universities
AL 127
MI 133
AZ 188
KY 150
CA 166
TN 145
NY 179
UT 98
So now we have 2017 data for the original states, and also for some new states. How do I take the data from the 2017 spreadsheet and merge it into the 2016 spreadsheet, while keeping the same order with reference to FID?
Thank you!
Relevant answer
Answer
To be exact, 50 states + District of Columbia. Still easy to handle data, I think.
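In Excel this is a VLOOKUP from the 2017 sheet into the 2016 sheet. The same merge logic in Python, using the state codes as keys: states already present keep their FID position and get the updated value, and states new in 2017 are appended after the original order:

```python
def merge_update(original, update):
    """Update original [(state, value)] rows with new values, keeping order.

    States already present keep their position (FID order) and take the
    updated value; states only in `update` are appended at the end.
    """
    new_vals = dict(update)
    merged = [(s, new_vals.get(s, v)) for s, v in original]
    seen = {s for s, _ in original}
    merged += [(s, v) for s, v in update if s not in seen]
    return merged

orig_2016 = [("AL", 124), ("KY", 155), ("CA", 166), ("NY", 176), ("UT", 98)]
new_2017 = [("AL", 127), ("MI", 133), ("AZ", 188), ("KY", 150),
            ("CA", 166), ("TN", 145), ("NY", 179), ("UT", 98)]
for fid, (state, n) in enumerate(merge_update(orig_2016, new_2017), start=1):
    print(fid, state, n)
```

The FID column is simply regenerated by the enumeration, so the original five states keep FIDs 1-5 and the new states get 6-8.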
  • asked a question related to Spreadsheets
Question
2 answers
I have all of my 38 years' worth of research from the archives and publication archives on an Excel spreadsheet. I am looking for someone who can tell me how to load it here on ResearchGate.
Relevant answer
Answer
I finally figured it out by adding it as an attachment.
  • asked a question related to Spreadsheets
Question
3 answers
I need to look at butterfly UKBMS transect data and, put simply, due to a syncing error on my uni's part, we've been given an assignment asking us to do stats tests even though we haven't begun learning about stats. I've figured out I need to do a chi-squared test since I'm dealing with categories and enumerations. The problem is, not having done a single stats test before, this is rather overwhelming. I've been given spreadsheets for 2008-2017 with counts of around 30 butterfly species, each year containing 1-26 weeks and counts for each of those weeks.
What hypothesis could I test with Chi-squared? I've read into the tests and have a basic grasp of what I'd need to do for the test itself (association, goodness of fit, cross tabulation depending on setup), but just not sure how to go about forming a hypothesis for that much data or how to really set it up.
I did a goodness of fit test for the total counts for the Large Skipper for 2008-2017 and it comes out as highly significant. Same for the association test I did comparing July mean temperature and Large Skipper abundance for 2008-2013. Problem is, I'm not sure what any of this means if I'm being quite honest. For example, association test gives the highest contribution to 2011 (over 50%). This is because there are 77 Large Skipper for that year and there just so happens to be a higher mean temp then. There could be a thousand reasons for their having recorded 77 Large Skipper.
Relevant answer
Answer
Right, I think I realised the test for association isn't what I wanted. I've simply done a goodness-of-fit test, but I still don't know where it indicates that there would be a difference. Is it in the low and high chi-squared numbers? My null hypothesis is that there will be less variation in the data for a generalist butterfly than for a specialist one, since generalists are more stable and abundant. The standard deviation for the two sets seems to suggest there is greater variation for the specialist, which is what I'd expect. I'm just not sure how to verify that with a stats test.
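For reference, the goodness-of-fit statistic is just the sum of (observed - expected)^2 / expected over the cells; computing it by hand makes it easier to see which cells drive the result. A sketch with made-up counts, assuming equal expected counts across years (the simplest null of "no year-to-year difference"):

```python
def chi_square_gof(observed, expected=None):
    """Pearson chi-square goodness-of-fit statistic.

    If `expected` is omitted, equal expected counts are assumed
    (total / number of categories). Returns (statistic, per-cell
    contributions) so you can see which cells drive the result.
    """
    if expected is None:
        e = sum(observed) / len(observed)
        expected = [e] * len(observed)
    contrib = [(o - e) ** 2 / e for o, e in zip(observed, expected)]
    return sum(contrib), contrib

# Hypothetical yearly Large Skipper counts over three years
counts = [10, 20, 30]
stat, contrib = chi_square_gof(counts)
print(stat, contrib)
```

With k - 1 degrees of freedom (here 2), the statistic is compared against a chi-square table; a large value means the counts differ across years more than the equal-counts null allows. As noted in the question, it does not by itself say why they differ.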
  • asked a question related to Spreadsheets
Question
10 answers
In May 2017, I posted my first research regarding the relationship between prime numbers and Fibonacci numbers: https://www.linkedin.com/pulse/relationships-between-prime-number-fibonacci-thinh-nghiem/
I have since had the chance to go further into this subject. In detail, I realized that a prime number can be decomposed into a sum of several Fibonacci numbers. Below are some examples:
29 = 21 + 3 + 5
107 = 89 + 13 + 5
1223 = 987 + 233 + 3
I have successfully decomposed the first 1,000 prime numbers with the above methodology. The calculations can be found in https://docs.google.com/spreadsheets/d/1sGmyr9dZwLhfFWcSgwviwm2X838h0CQF4KWRqX_eXkA/edit#gid=685523897
I have tried unsuccessfully to limit the sums to only 3 Fibonacci numbers. As you can see in my shared worksheet, some prime numbers require 6 or even 7 Fibonacci numbers. I expect that in further research a simpler relationship between these types of numbers can be discovered. All feedback is welcome. Regards,
Thinh Nghiem
Relevant answer
Answer
Dear Thinh,
Nice observation.
In fact, one can prove the following general result:
Theorem: If n > 1, then n equals a finite sum of Fibonacci numbers.
The proof is simple:
If n is a Fibonacci number, there is nothing to prove:
n = Fm = Fm-1 + Fm-2
(by the Fibonacci recurrence).
If not, choose the largest Fibonacci number Fm not exceeding n,
that is, n = Fm + n1 with 0 < n1 < Fm.
Now if n1 is a Fibonacci number, we are done:
n = Fibonacci + Fibonacci.
If not, proceed the same way for n1: choose the largest Fibonacci number Fk not exceeding n1,
n1 = Fk + n2, so n = Fm + n1 = Fm + Fk + n2.
Keep applying the same procedure; the remainders strictly decrease, so the process terminates at a Fibonacci number and the proof is complete.
Example 1. n = 354224848179261915096
The largest Fibonacci number not exceeding n is F100 = 354224848179261915075, so
n = F100 + 21.
Here 21 is itself a Fibonacci number (F8), so n = F100 + F8; splitting further, 21 = 13 + 8 gives n = F100 + F7 + F6.
One may investigate the problem:
What is the smallest sum of Fibonacci numbers that represent a given positive integer n?
Best wishes
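The greedy procedure in the theorem (always subtract the largest Fibonacci number not exceeding the remainder) is Zeckendorf's construction and is easy to script. It reproduces two of the question's examples exactly; for 29 the greedy choice gives the equivalent decomposition 21 + 8:

```python
def fib_decompose(n):
    """Greedy (Zeckendorf) decomposition of n > 0 into Fibonacci numbers.

    Repeatedly subtracts the largest Fibonacci number <= the remainder;
    the resulting summands are distinct and non-consecutive.
    """
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
        if n == 0:
            break
    return parts

print(fib_decompose(29))    # [21, 8]
print(fib_decompose(107))   # [89, 13, 5]
print(fib_decompose(1223))  # [987, 233, 3]
```

Zeckendorf's theorem guarantees this greedy representation is the unique one with no two consecutive Fibonacci numbers, which also answers the "smallest sum" question for that restricted form.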
  • asked a question related to Spreadsheets
Question
3 answers
The CBE (http://comfort.cbe.berkeley.edu/) website is difficult to use for big sample data. I want to calculate ASHRAE 55, EN-15251 and adaptive thermal comfort of 1000 samples. Is there any source to download a spreadsheet to make calculations faster?
Relevant answer
Answer
For adaptive thermal comfort, depending on which model you want to utilize, you can easily construct your own spreadsheets since the formulation is just a linear equation between outdoor temperature and comfort temperature. For PMV, since the equations are non-linear and implicit, I am not sure if a ready-made spreadsheet exists.
For implementation of comfort models in a programming environment, you can check the "comf" package of R.
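As an illustration, the adaptive models each reduce to a single line. The coefficients below are the published ones (ASHRAE 55 adaptive: Tcomf = 0.31 * Tout + 17.8; EN 15251: Tcomf = 0.33 * Trm + 18.8), but double-check which outdoor-temperature definition (prevailing mean vs. exponentially weighted running mean) your 1000 samples use:

```python
def ashrae55_comfort_temp(t_out_mean):
    """ASHRAE 55 adaptive model: comfort temperature (deg C) from the
    prevailing mean outdoor air temperature (deg C)."""
    return 0.31 * t_out_mean + 17.8

def en15251_comfort_temp(t_running_mean):
    """EN 15251 adaptive model: comfort temperature (deg C) from the
    exponentially weighted running mean outdoor temperature (deg C)."""
    return 0.33 * t_running_mean + 18.8

# Applied over a column of outdoor temperatures, e.g. from a CSV:
outdoor = [18.0, 20.0, 26.5]
print([round(ashrae55_comfort_temp(t), 2) for t in outdoor])
print([round(en15251_comfort_temp(t), 2) for t in outdoor])
```

The acceptability bands are then just fixed offsets around the comfort temperature (e.g. +/- 2.5 K for ASHRAE 55 90% acceptability); PMV, as noted above, needs an iterative solver and is better taken from an existing package such as R's "comf".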
  • asked a question related to Spreadsheets
Question
6 answers
Is there any way to draw/show routes in Google Maps from an Excel spreadsheet? I have some origin/destination bike routes (O/D latitude and longitude in Excel) which I need to draw in Google Maps and convert to a shapefile. Is there any way to draw all these routes at once? I can draw them manually one by one, but I want to know if there is a way to draw all the routes in a single step. Thanks.
Relevant answer
Answer
If you want to convert the Excel data to a shapefile, you can import the coordinates into any GIS software and plot them as points, then convert the points to lines. You can export them to KML later if you want.
  • asked a question related to Spreadsheets
Question
3 answers
I've only seen it used for routing single-event hydrographs. I'm trying to model reservoir water levels and spillway flows using a 10-year inflow time series, bathymetry (I have already calculated the stage-storage and stage-area relationships), an ogee spillway rating equation, and prescribed minimum conservation flows through separate outlets. Using a basic spreadsheet water balance at a daily or hourly time step, the head and flows over the spillway come out unrealistically large. A 15-minute step helped, but it is straining my computing resources. Can level-pool (storage-indication) routing be applied here? Are any special considerations or assumptions needed?
Relevant answer
Answer
Well, supposing that climate conditions remain similar, you can apply the Gumbel statistical method to estimate the probability of exceedance (%) of any variable used in the water balance for future estimation.
thanks
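On the routing question itself: level-pool routing applies whenever the pool surface stays level and outflow is a function of stage alone, and over a long series it is just the pool water balance marched forward in time. A toy explicit sketch (prismatic pool of constant area, ogee-type rating Q = C*L*H^1.5; all numbers are made up):

```python
def route_level_pool(inflow, dt, area, crest, c_times_l, h0):
    """Explicit level-pool routing for a stage-discharge reservoir.

    inflow    -- list of inflows (m3/s), one per time step
    dt        -- time step (s); keep it small enough that the stage
                 change per step is tiny, or switch to the implicit
                 storage-indication form (2S/dt + O)
    area      -- pool surface area (m2), constant here; replace with
                 the stage-area curve for a real bathymetry
    crest     -- spillway crest elevation (m)
    c_times_l -- discharge coefficient times crest length (C*L)
    h0        -- initial water surface elevation (m)
    Returns (stages, outflows).
    """
    h = h0
    stages, outflows = [], []
    for q_in in inflow:
        head = max(h - crest, 0.0)
        q_out = c_times_l * head ** 1.5    # ogee rating Q = C*L*H^1.5
        h += (q_in - q_out) * dt / area    # water balance on the pool
        stages.append(h)
        outflows.append(q_out)
    return stages, outflows

# Constant 10 m3/s inflow: outflow should settle where Q_out = Q_in.
stages, outflows = route_level_pool([10.0] * 20000, dt=60.0, area=1e5,
                                    crest=100.0, c_times_l=17.0, h0=100.0)
print(round(outflows[-1], 2), round(stages[-1] - 100.0, 3))
```

If a spreadsheet gives unrealistically large heads at daily steps, it is usually the explicit time step overshooting, not the level-pool method itself, that is failing: the classic storage-indication tabulation (2S/dt + O versus O) solves the balance implicitly and removes that time-step restriction.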
  • asked a question related to Spreadsheets
Question
4 answers
I need a free program to import spectrophotometer readings into Microsoft Excel (2007 to 2013) under the Windows operating system (Win7 to 10).
Our spectrophotometer has an RS232 port and we want to measure SOD activity, so we need free software to transfer the O.D. readings to Excel at short, specific time intervals.
Relevant answer
Answer
Is this hypothetical 'free' software even verified to comply with 21 CFR part 11?  This software can cost you 'big time' in the end!
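Whatever logging tool is chosen, the capture-to-Excel step reduces to parsing the instrument's text lines and writing them into CSV, which Excel opens directly. A sketch of the parsing half (the `OD:` line format is hypothetical; check the instrument manual for the actual frame, and note that the real acquisition would read lines from the RS232 port, e.g. with the third-party pyserial package):

```python
import csv, io, re

# Hypothetical instrument line format, e.g. "OD: 0.532".
# With pyserial, lines would come from serial.Serial("COM1", 9600).readline().
OD_LINE = re.compile(r"OD:\s*([-+]?\d*\.?\d+)")

def parse_od(line):
    """Extract the optical-density reading from one instrument line,
    or None if the line is not a reading."""
    m = OD_LINE.search(line)
    return float(m.group(1)) if m else None

def log_readings(lines, out):
    """Write (sample_no, OD) CSV rows for every reading found in lines.
    Returns the number of readings logged."""
    w = csv.writer(out)
    w.writerow(["sample", "OD"])
    n = 0
    for line in lines:
        od = parse_od(line)
        if od is not None:
            n += 1
            w.writerow([n, od])
    return n

buf = io.StringIO()
n = log_readings(["READY", "OD: 0.532", "OD: 0.517"], buf)
print(n, buf.getvalue().splitlines()[1:])
```

For a regulated (21 CFR Part 11) environment, as Donald warns above, a home-grown script like this would not be acceptable without a validated audit trail.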
  • asked a question related to Spreadsheets
Question
1 answer
I'm getting stuck, in particular, once my ROIs are selected and I tell it to count foci. Each cell gets a spreadsheet with no headings telling me what each column represents. I'm not interested in batch mode as I don't have that many pictures. A) Can all the cell data be put into one single sheet? B) What are my headings?
Thanks for any help.
betty
Relevant answer
Answer
Hi Betty,
I just found your comment. It seems like your file output is not complete. Please visit our project page on RG to get the new version of the Focinator 2-10. If you still need help, just write me!
Best wishes 
Sebastian Oeck
  • asked a question related to Spreadsheets
Question
3 answers
We have disseminated and received feedback on an internal customer service survey.  No one in the unit is particularly skilled in statistics or data analysis.  The survey itself was developed in Survey Monkey using a 5 point Likert scale along with questions about demographic information.  Because Survey Monkey was new to staff, the survey was set up in a way that prevents certain comparisons and the aggregating of responses to a group of responses cannot be cleanly performed.
I have downloaded all the data into Excel and am seeking help with the following:
I have 6 categories of respondents with 13 to 536 individuals per category answering questions under 6 themes.  The themes have 2 – 6 questions each.
What I would like to do is obtain the average rating for each theme by each respondent category.
As stated, no one here is statistically savvy so in case I’m not making sense, I will provide an example.  Our staff is divided into the following categories: administration, program 1, program 2, program 3… program 6.  I want to obtain the average rating given by a category (eg administration) for the theme (eg Agency teamwork which has six questions).  My spreadsheet is in Excel.
Any help you can provide will be greatly appreciated.
Relevant answer
Answer
Dear Cecelia,
I agree with Lall that Principal Component Analysis is a good choice for each of your themes that have at least 3 questions in them. I would recommend removing any questions with a score of less than 0.4 on the first component. Provided that you end up with at least 3 questions in a theme you can then save the regression model to create a single variable summarizing that theme. Please see my guide to scale reliability for more information. If you only have one or two questions left in a theme then you will need to use these individually in your analysis.
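Independently of the PCA, the plain average-per-theme table Cecelia asked for is a one-pass aggregation: in Excel this is AVERAGEIFS or a pivot table. The equivalent logic in Python, with made-up column and theme names, and with missing answers skipped rather than counted as zero:

```python
from collections import defaultdict

def theme_means(responses, theme_questions):
    """Average rating per (respondent category, theme).

    responses       -- list of dicts, e.g. {"category": ..., "Q1": 4, ...}
    theme_questions -- mapping theme name -> list of question columns
    Missing answers (None) are skipped rather than counted as zero.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for r in responses:
        for theme, qs in theme_questions.items():
            for q in qs:
                v = r.get(q)
                if v is not None:
                    sums[(r["category"], theme)] += v
                    counts[(r["category"], theme)] += 1
    return {k: sums[k] / counts[k] for k in sums}

data = [
    {"category": "administration", "Q1": 4, "Q2": 5},
    {"category": "administration", "Q1": 3, "Q2": None},
    {"category": "program 1", "Q1": 2, "Q2": 2},
]
print(theme_means(data, {"Teamwork": ["Q1", "Q2"]}))
```

With the very different group sizes mentioned (13 to 536 respondents), it is worth reporting the per-cell respondent counts alongside the means.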
  • asked a question related to Spreadsheets
Question
4 answers
Does anyone know if I can calculate Vuong's test in SAS, or if not, does anyone have an Excel spreadsheet that can do the trick?
The Vuong (1989) test is a likelihood-ratio test for non-nested model selection. The Vuong test statistic allows both models to have explanatory power, but provides direction concerning which of the two is closer to the true data-generating process. Simply put, I want to use the Vuong test to calculate a z-score (comparing the R-squares of the two models) in order to choose between the two models.
For instance, I would like to compare:
P = EARN + BVE to
P = ADJEARN + ADJBVE
where P is stock price, EARN is accounting earnings, BVE is book value of equity, and ADJ means adjusted.
I need this badly to do some final testing in my PhD dissertation.
Any help is highly appreciated.
Relevant answer
Answer
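A minimal sketch of the z-statistic described in the question: with the per-observation log-likelihood difference m_i = l1_i - l2_i, Vuong's z = sqrt(n) * mean(m) / sd(m), compared against a standard normal. The log-likelihood inputs below are placeholders, not real regression output; in practice you would compute them observation by observation from the two price regressions:

```python
import math

def vuong_z(loglik1, loglik2):
    """Vuong (1989) z-statistic for non-nested model comparison.

    loglik1, loglik2 -- per-observation log-likelihoods of the two
    models on the same data. A positive z favours model 1, a negative z
    model 2; |z| > 1.96 is significant at the 5% level (two-sided).
    """
    m = [a - b for a, b in zip(loglik1, loglik2)]
    n = len(m)
    mbar = sum(m) / n
    s = math.sqrt(sum((x - mbar) ** 2 for x in m) / (n - 1))
    return math.sqrt(n) * mbar / s

# Placeholder values, e.g. from P ~ EARN + BVE vs. P ~ ADJEARN + ADJBVE
ll_model1 = [-1.2, -0.9, -1.1, -0.8, -1.0, -1.3]
ll_model2 = [-1.5, -1.1, -1.4, -1.0, -1.2, -1.6]
print(round(vuong_z(ll_model1, ll_model2), 3))
```

Note that some versions of the test apply an AIC- or BIC-style correction to the mean of m when the two models have different numbers of parameters; check which variant your literature expects before reporting it.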
  • asked a question related to Spreadsheets
Question
3 answers
Hello! Does anyone have an Excel spreadsheet, operating under Windows, of the geothermometer and oxygen barometer of Ghiorso and Evans (2008)?
Relevant answer
Hi Cynthia,
On this webpage you can find what you are looking for:
Ghiorso and Evans (2008) Fe-Ti oxide geothermobarometer
There are workbooks for Microsoft™ Excel 2002 (XP) and Excel 2003, running under Windows 98, ME, NT, XP, 2000, or Windows Server 2003, which calculate temperatures and oxygen fugacities from tabulated oxide compositions.
Good luck 
Regards..
Cesar Tarazona
  • asked a question related to Spreadsheets
Question
3 answers
The Excel spreadsheet has many variables and formulas. I need assistance implementing an optimization model that minimises the difference between observed and predicted evaporation over a given period of time, e.g. N days. The idea is to estimate two parameters using the model. I have challenges implementing such a model; your assistance will be greatly appreciated.
Relevant answer
Answer
I am sorry, I don't understand the last explanation, so I will bow out of the discussion. Since I responded, you may want to restate your question and include some actual data that describes in more detail what you are doing, as an example of developing a multi-day optimization model, something new to me.
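For what it's worth, one way to set up the estimation described in the question without Excel's Solver is a coarse-to-fine grid search: define the model, sum the squared differences between observed and predicted evaporation over the N days, and repeatedly shrink the search box around the best (a, b). A sketch, where the linear model E = a*T + b is only a stand-in for the actual evaporation formula in the spreadsheet:

```python
def sse(params, temps, observed, model):
    """Sum of squared errors of the model over the observation period."""
    a, b = params
    return sum((model(a, b, t) - o) ** 2 for t, o in zip(temps, observed))

def grid_fit(temps, observed, model, a_rng, b_rng, levels=5, steps=21):
    """Coarse-to-fine grid search minimising the sum of squared errors.

    At each level the (a, b) box is scanned on a steps x steps grid and
    then shrunk around the best point. Crude but Solver-free and easy
    to audit; swap in scipy.optimize.minimize for anything serious.
    """
    (a_lo, a_hi), (b_lo, b_hi) = a_rng, b_rng
    best = None
    for _ in range(levels):
        da = (a_hi - a_lo) / (steps - 1)
        db = (b_hi - b_lo) / (steps - 1)
        for i in range(steps):
            for j in range(steps):
                p = (a_lo + i * da, b_lo + j * db)
                err = sse(p, temps, observed, model)
                if best is None or err < best[1]:
                    best = (p, err)
        (a, b), _ = best
        a_lo, a_hi = a - da, a + da   # shrink the box around the best point
        b_lo, b_hi = b - db, b + db
    return best

model = lambda a, b, t: a * t + b      # placeholder evaporation model
temps = [10, 15, 20, 25, 30]
obs = [2.1, 3.1, 4.1, 5.1, 6.1]        # generated from a=0.2, b=0.1
params, err = grid_fit(temps, obs, model, (0.0, 1.0), (0.0, 1.0))
print([round(p, 3) for p in params], round(err, 6))
```

The same structure works for any two-parameter daily model: only `model` and the observed series change, which mirrors what Solver would do against the spreadsheet cells.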
  • asked a question related to Spreadsheets
Question
4 answers
Hi,
I am writing a thesis about predicting abnormal stock returns based on sentiment analysis of tweets.
More specifically we have a huge datasets of tweets, corresponding to a randomized sample of about 1% of all tweets during a year.
Now, we want to sort out the tweets mentioning the companies in the index we are looking at, which is EURO Stoxx 50.
We now want to filter our dataset for tweets containing any cashtag ($) for our companies. For example, AstraZeneca will be $AZN, their ticker symbol. So for this index we will filter for a list of any of 50 cashtags. How can we do this? Preferably in Excel.
I have enclosed a picture of what our spreadsheet looks like, as well as a sample of the dataset.
KR
Benjamin
Relevant answer
Answer
I would say this is difficult and quite convoluted to do in Excel. You can do it simply using Python by saving your data as a CSV: read the data in pandas, then just use regular expressions to look for these names. I can help you with this; feel free to contact me by leaving a message on my site (arindampaul.me).
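A minimal version of that regex approach using only the standard library (the short ticker list and the sample tweets are illustrative; the same pattern works applied to a pandas text column with `str.contains`):

```python
import re

TICKERS = ["AZN", "SAP", "SAN", "BNP"]      # stand-ins for the 50 EURO Stoxx tickers
# \$ matches the cashtag symbol; \b stops $AZN from also matching $AZNX
CASHTAG = re.compile(r"\$(" + "|".join(TICKERS) + r")\b", re.IGNORECASE)

def filter_cashtag_tweets(tweets):
    """Keep tweets mentioning any listed cashtag, tagging which tickers."""
    out = []
    for text in tweets:
        hits = {m.group(1).upper() for m in CASHTAG.finditer(text)}
        if hits:
            out.append((text, sorted(hits)))
    return out

tweets = [
    "Big day for $AZN after trial results",
    "$AZNX is a different symbol",
    "Long $sap and $BNP this week",
    "No tickers here",
]
for text, tickers in filter_cashtag_tweets(tweets):
    print(tickers, "-", text)
```

The word boundary matters for index work: without it, tickers that are prefixes of other symbols would produce false matches.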
  • asked a question related to Spreadsheets
Question
8 answers
Hello,
I think an example would be the easiest way to be clear: if I were to go to the URL (http://www.ncbi.nlm.nih.gov/pubmed/26644394) and then want to have an excel spreadsheet - or I can go outside excel, though it is my comfort zone - auto populate with certain portions of the page, how would I do it/set it up? For instance, I might have a column for pubmed ID (26644394 for this paper, it is in the URL or also always re-listed in the bottom left hand side of the Pubmed page for each entry), another column for the full title of the paper, another with author names listed, one with year published... and maybe things that would require 'clicking' such as under the drop down 'Author information' section or maybe the full name of the journal (note that the URL gives the abbreviated name, so that might be complex/require matching the standard abbreviation the Pubmed URL lists to a database at a different URL and taking the full name from there).
The desire is that I could automate the process so I only specify the pubmed URL and the other entries automatically populate. Many programs do this for certain entries so I am sure in principle it is not difficult, but I don't know how to do it so that the specific information the user wants is easily generated - rather than a pre-set version from a program like Mendeley which might simultaneously generate more information fields than you need while at the same time not collecting certain information you'd like - once you set up the initial protocol.
Does anyone know how one might go about this - Excel or not, and requiring a software download or other work around, this would be really helpful to me and perhaps other users on this site. So thank you for your suggestions!
Please have a good day.
PS - if you can figure out how to also collect total number of citations a paper has, and the impact factor of the journal at the time the article in question was published, i.e. through a workaround via a separate source, that would amaze me and I would appreciate thoughts on that too! Thank you
Relevant answer
Answer
My http://bibliometri.wikidot.com/bibliometry-toolbox includes a number of programs able to convert tagged file formats to TAB-format, including PubMed's MEDLINE-format:
The page is in Danish, but Google Translate makes it readable. Please contact me if you have any questions or suggestions for improvements.
The most recent version is:
Program: MEDLINE to TAB - extracts and compile the following datafields:
PMID: PubMed ID - searchable in SCOPUS and WoS (Advanced Search)
DP: Date of Publication - for both the e- and p-versions, e.g.: 2015 Mar; 20141017
AU: Full author name(s) (if available: FAU)
TI: Article title
DE: MeSH-terms
DT: Document type
Source: Full journal name, volume, issue, pages and/or article number: JT VI[IP]: PG
ISSN: all ISSN's in record
AID: article-DOI
AD: Collects all (full) author names + all affiliations: Format: First author (full name) ¤¤¤ Affiliation 1 === Affiliation 2 (if present) /// Second author (full name) ¤¤¤ Affiliation 1 === Affiliation 2 etc.
AB: Abstract
TT: Article title in original language (if not american-english)
The programs "SCOPUS PMID Search" and "WoS PMID Search" create search profiles for any number of PMIDs if you want to collect citation data from these sources.
Please note, that the programs will crash if you try to overwrite your input-file
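The MEDLINE format those programs consume is a simple tagged text format (a short tag, a dash, a value; continuation lines start with whitespace), so a minimal converter fits in a few lines. A sketch parsing one record into a tag-to-values dict (the sample record is made up; a real record can be fetched from NCBI's E-utilities efetch endpoint with db=pubmed and rettype=medline):

```python
def parse_medline(text):
    """Parse one MEDLINE-format record into {tag: [values]}.

    Tags occupy the start of the line followed by '- '; lines starting
    with whitespace continue the previous field. Repeating tags (AU,
    MH, ...) accumulate into lists.
    """
    fields = {}
    tag = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if line.startswith(" ") and tag:
            fields[tag][-1] += " " + line.strip()   # continuation line
        else:
            tag, _, value = line.partition("- ")
            tag = tag.strip()
            fields.setdefault(tag, []).append(value.strip())
    return fields

record = """PMID- 26644394
TI  - A made-up example title that wraps
      onto a second line.
AU  - Smith J
AU  - Jones K
JT  - Example Journal
"""
rec = parse_medline(record)
print(rec["PMID"][0], "|", rec["TI"][0], "|", "; ".join(rec["AU"]))
```

Writing the chosen fields out tab-separated, one record per row, yields a file Excel opens directly, which is essentially what the MEDLINE-to-TAB program above automates.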
  • asked a question related to Spreadsheets
Question
4 answers
Examples can be molecular dynamics, chemical reactions, particle physics, astronomy, even weather patterns. It's the data in flat form, e.g. spreadsheets, that I need. Attached is an example of exoplanet data from a book and the clusters my algorithm identified. The data can be multi-dimensional; I will take care of that. Thank you in advance.
Relevant answer
Answer
You can find some datasets to start your clustering analysis at the following link. You could also check R packages for datasets.
  • asked a question related to Spreadsheets
Question
2 answers
Thanks in advance for your replies.
Relevant answer
Answer
Dmitry,
Thank you for your interest; however, I have since been able to convert it. MRTG is a graphical presentation of the instantaneous data rate (in- and out-bound) across a router.
Regards,
Ayodeji
  • asked a question related to Spreadsheets
Question
5 answers
I work a lot with Landsat data, but am by no means an expert.  I suspect many researchers would find it useful to query the landsat scene database for reflectance values for an array of points without going through the steps of downloading tiles and then processing them individually in Arc.  Has anyone found (or written) code or an efficient workflow that does this?
Ideally, one should be able to use the solution to download selected band reflectance values (either coincident or averaged for a small area) into a spreadsheet with over 100 points from multiple time-points and for multiple tiles.  
Relevant answer
Answer
There appear to be several query languages for Landsat. Try googling 'query language against Landsat data'; you might find one that will allow you to target your specific goal. Again, good luck!
  • asked a question related to Spreadsheets
Question
2 answers
This "program" or spreadsheet needs to be simple enough that someone with limited training can enter their point-count observations and generate a density estimate. The user will not need to know anything more than how to enter the data; the calculation of density will be entirely a black box.
Relevant answer
Answer
Thanks. There was a problem using this program, and I don't know how to contact Rua for an update or assistance with using it.
  • asked a question related to Spreadsheets
Question
6 answers
I have created a course module for final-year agriculture undergrads, "Cloud Computing and Ground Truth Data", to teach them how to use Python to capture and analyse data. I am surprised by how often they resort to spreadsheets and GENSTAT for data crunching, which are not appropriate. I have stopped using MATLAB, as every library I need is now open source in Python. Python is particularly useful for crunching large longitudinal files of movement and rumen data. Could we set up a standardized library?
Relevant answer
Answer
To get closer to your goal of standardized analysis code that researchers can share and improve, I would suggest starting a project at a code-sharing site like GitHub or SourceForge and inviting other researchers in the field to use your code. Ease of access for everyone is essential.