Science topics: R Factors

R Factors - Science topic

A class of plasmids that transfer antibiotic resistance from one bacterium to another by conjugation.
Questions related to R Factors
  • asked a question related to R Factors
Question
1 answer
Hi all,
I need to calculate rainfall erosivity for a basin in Thailand and, due to data limitations, I only have daily rainfall data. I have been using the Arnoldus (1977) formula (as presented by Chen et al. 2023 - see attachment) to calculate annual rainfall erosivity. However, I am a little stumped on how to calculate monthly rainfall erosivity with this formula. I assume that I just need to remove the sum in the log part, but I am not 100% certain that is the right way to proceed.
Can anyone please let me know if my approach is correct and if not, how I should proceed in this case?
Many thanks
Relevant answer
Answer
Not really my field, but it may not be as simple as that. The equation is for annual erosion. Suppose all rain fell in January only (month i) and none otherwise; then Pi = P,
so Σ_i Pi²/P − 0.8188 would be P²/P − 0.8188 = P − 0.8188, and annual erosion = 17.02 × 10^(1.5 × log(P − 0.8188)). This is not the monthly erosion in January, because the effect of that rain would be felt over the whole year, and this effect would be different in different localities, climates, etc.
This is an empirical equation calibrated for annual erosion, not monthly erosion.
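For anyone computing this in practice, here is a minimal R sketch. It assumes the common Arnoldus-type form R = 17.02 × 10^(1.5·log10(Σ Pi²/P) − 0.8188); the monthly totals below are invented, and the exact placement of the −0.8188 term should be checked against the Chen et al. (2023) attachment.
# minimal sketch: monthly totals Pi aggregated from the daily record (values invented)
Pi <- c(5, 8, 20, 60, 150, 220, 250, 240, 180, 90, 25, 10)   # monthly precipitation, mm
P  <- sum(Pi)                                                # annual precipitation, mm
F  <- sum(Pi^2) / P                                          # modified Fournier index
R_annual <- 17.02 * 10^(1.5 * log10(F) - 0.8188)             # annual erosivity (verify formula form)
R_annual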
  • asked a question related to R Factors
Question
1 answer
Hello!
I just started to use GSAS-II and I am having trouble finding Rexp after refinement. On the Covariance tab I see GOF (goodness of fit) and wR (Rwp), but I cannot find Rexp. Can you help me?
Thank you!
Kind regards,
J.V.
Relevant answer
Answer
According to this slideshow:
GOF = (Rwp / Rexp) ^2
So, you can do the math and obtain Rexp.
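Taking that relation at face value, the rearrangement is Rexp = Rwp / sqrt(GOF). For example, in R (the two input values below are made up):
Rwp  <- 0.176               # wR reported on the Covariance tab
GOF  <- 2.69                # goodness of fit reported there
Rexp <- Rwp / sqrt(GOF)     # follows from GOF = (Rwp / Rexp)^2
Rexp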
  • asked a question related to R Factors
Question
3 answers
Hi,
I have calibrated a snow/glacier-fed mountainous watershed.
The calibration results are good, with R2 and NSE > 0.80, p-factor 0.77, r-factor 0.71, and PBIAS 7.5.
I obtained the following values:
v__SFTMP.bsn = -1.750000
v__SMTMP.bsn = -5.850000
v__SMFMX.bsn = 1.100000
v__SMFMN.bsn = 2.100000
v__TIMP.bsn = 0.083333
Is it Ok to have SFTMP < SMTMP and similarly SMFMX < SMFMN for a watershed in the Northern Hemisphere?
Relevant answer
Answer
It really depends on how you wish to define "ok".
Regarding the model's functionality, yes, it does not matter very much: even if the calibration process suggests parameter values that are physically unreasonable, as long as each value is within its own "functionally acceptable" range, the model can still run and yield discharge results for you.
You can find the reference to this range in the SWAT-CUP interface, "Calibration inputs" >> "Absolute_SWAT_values.txt" (and some of the parameter values can even be outside the given range there).
On the other hand, if you are wondering whether the physical meaning of these calibrated parameters is acceptable, then it is a much more complicated question.
The quick answer to your question would be, no it is not ok.
A suggestion would be to specify the numerical relationship in the initial calibration setting so that this situation is prevented from the beginning. Assuming you are using SWAT-CUP for calibration, when you add SFTMP and SMTMP in the "Par_inf.txt" of the interface, the third, wider column is "Filter condition (optional)", and the last sub-column there is "Conditional filter". Select your desired parameter and you will see a "..." under this sub-column. Click it to open the Conditional Filter Editor, include both SFTMP and SMTMP via "New Parameter", then define their relationship in the right panel. It is a very simple process; I trust you will understand how to use it once you see it. If not, you may ask me again.
In this way, I think the problem itself can be solved.
However, as I said, it is actually a more complicated question, so I have a longer explanation for you on this issue; you can keep reading if you are interested.
--------------------------------------
Before explaining any further, one essential point you need to understand is that an optimization process such as the one SWAT-CUP uses is highly statistics-based. In other words, the calibration process does NOT guarantee the physical meaning of any parameter at all. As a result, even when the NSE, P-bias, or r^2 are all very good after calibration, it does not necessarily mean the numerical values of the calibrated parameters are physically true to the local reality in your target catchment.
If this notion is the definition of "ok", then theoretically you need to make sure the physical meaning of every calibrated parameter is reasonable and somewhat true to the local reality of your target catchment.
However, you may notice that this notion is rarely realized in most published papers (even in highly ranked journals). This is because, without validation data (not the data used to validate the discharge result, but data to validate the physical meaning of the parameters), no one can really prove the numerical values of the calibrated parameters are physically wrong (although, at the same time, you cannot prove them correct either). As an alternative, the commonly accepted way is to ensure two points:
1. The initial calibration range(s) is physically meaningful (but only in a general sense, not necessarily true to the local reality), so that after the calibration process the calibrated values will still be physically meaningful.
2. The calibrated parameter(s) show acceptable sensitivity.
Now, back to your initial question: if the term "ok" refers to whether the consequent result would be good enough for a thesis or a journal paper, then in most cases it will be ok as long as you fulfil these two points and show the evidence in your work.
Hope this helps.
  • asked a question related to R Factors
Question
11 answers
I am using the RUSLE model for determining erosion in watersheds of Nepal. I want to know the level of erosion due to precipitation in the monsoon. So can I use precipitation of only the monsoon months (June-September) in the model instead of the average annual precipitation?
Relevant answer
Answer
Who can support me on the revised R-factor estimation in RUSLE?
  • asked a question related to R Factors
Question
1 answer
Hi, I'm trying to extract the parameters related to quality control of PDB files from X-ray crystallography.
Parameters such as the R and R-free values are available in the PDB header, but I couldn't find the
real-space R-value and the real-space correlation coefficient.
Is there any other place where I can find this information, or
any other public tool to calculate these parameters?
Thanks
Relevant answer
Answer
These values are available in wwPDB validation reports (e.g., https://files.rcsb.org/pub/pdb/validation_reports/dt/1dtj/1dtj_validation.cif.gz), a snippet is given below.
loop_
_pdbx_vrpt_model_instance_density.ordinal
_pdbx_vrpt_model_instance_density.instance_id
_pdbx_vrpt_model_instance_density.natoms_eds
_pdbx_vrpt_model_instance_density.RSRCC
_pdbx_vrpt_model_instance_density.RSR
_pdbx_vrpt_model_instance_density.RSRZ
1 1 8 0.873 0.146 0.431
2 2 5 0.911 0.081 -1.108
3 3 9 0.941 0.095 -0.832
4 4 8 0.978 0.089 -0.796
5 5 7 0.972 0.112 -0.236
...
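If you need these values programmatically, here is a minimal R sketch under stated assumptions: it downloads the validation report quoted above and prints the first rows of the per-residue density-fit loop. The entry URL and the category name are taken from the snippet above; the fixed number of printed lines is just for illustration.
con   <- gzcon(url("https://files.rcsb.org/pub/pdb/validation_reports/dt/1dtj/1dtj_validation.cif.gz"))
lines <- readLines(con)
close(con)
# locate the density-fit loop (natoms_eds, RSRCC, RSR, RSRZ columns) and show its start
start <- grep("_pdbx_vrpt_model_instance_density.ordinal", lines, fixed = TRUE)[1]
writeLines(lines[start:(start + 15)])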
  • asked a question related to R Factors
Question
5 answers
Dear RG members,
Please provide some information on how to separate a mixture of 4-nitrobiphenyl and 4-nitrobromobenzene, as they both have the same Rf in EA, hexane, DCM, PE, diethyl ether, MeOH and in mixtures of EA/Hex, Hex/DCM, PE/Hex, PE/EA, diethyl ether/Hex, and diethyl ether/EA in various proportions. The literature reports clean NMR of biphenyls from the Suzuki coupling reaction. Can anyone help me to separate these products? On the TLC plate, both 4-nitrobiphenyl and 4-nitrobromobenzene show a single spot, but the crude NMR contains peaks of both reactant and product.
Relevant answer
Answer
Chen Jingwen thanks for your suggestions.
  • asked a question related to R Factors
Question
5 answers
Hello everyone, I am doing calibration of flow for my watershed. My concern is that I am stuck with the values of NSE, R2 and PBIAS; only the p-factor and r-factor change, and they keep decreasing.
How can I decrease the values of CN2 and ALPHA_BF? These parameters are among the most sensitive parameters in my model.
What do I need to check to verify these values?
Thank you!
Relevant answer
Answer
Michelle - unfortunately I'm not familiar with the SWAT-CUP software, so I just don't know whether or not SWAT-CUP can update these two parameters. Sorry I can't help you more with this; hopefully someone who works with SWAT will be able to help with your question.
  • asked a question related to R Factors
Question
7 answers
Barplot in R software.
Relevant answer
Answer
You can use the following code to set a pattern for each group. You will need both the ggplot2 and ggpattern packages (data, group and x below stand for your own data frame and variables).
library(ggplot2)
library(ggpattern)
ggplot(data, aes(group, x)) + geom_bar_pattern(stat = "identity", pattern_color = "white", pattern_fill = "black", aes(pattern = group))
This should give you the desired result.
Best wishes,
Francesco
  • asked a question related to R Factors
Question
3 answers
By the way, what is the acceptable value for R-factors?
Relevant answer
Answer
Dear Asim, for some additional information about acceptable R-factors in single-crystal X-ray structure determinations please have a look at the following closely related questions which have been asked earlier on RG:
What is the allowed R value of the good crystal?
(7 answers)
and
How to reduce the R-value of the crystal?
(4 answers)
Good luck with your work and best wishes!
  • asked a question related to R Factors
Question
6 answers
Is there any equation for the R factor (for RUSLE) using annual precipitation data in Khulna, Bangladesh? Or is there any other method for determining the R equation using local data for each station? I have daily, monthly and annual precipitation data for each station in my study area.
  • asked a question related to R Factors
Question
3 answers
I need to extract the rainfall erosivity factor (R) from gridded rainfall data (TRMM, TMPA or IMERG). For that reason, I want to use the statistical programming language R. Can anyone provide me with the code for that? Thank you in advance.
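A minimal sketch of one possible starting point in R with the terra package is shown below; the folder path, the assumption of 12 monthly GeoTIFF layers, and the MFI-to-R coefficients are all placeholders, and any MFI-to-R conversion must be calibrated for the study area.
library(terra)
files   <- list.files("precip_monthly", pattern = "\\.tif$", full.names = TRUE)  # 12 monthly rasters, mm
monthly <- rast(files)
P       <- app(monthly, sum)                 # annual precipitation per cell
MFI     <- app(monthly^2, sum) / P           # modified Fournier index per cell
a <- 1; b <- 1                               # placeholder coefficients; fit R = a * MFI^b for your region
R_factor <- a * MFI^b
writeRaster(R_factor, "R_factor.tif", overwrite = TRUE)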
  • asked a question related to R Factors
Question
3 answers
Relevant answer
Answer
Good question.
  • asked a question related to R Factors
Question
4 answers
Could you please help? I want to refine BaFe11.7Mg0.3O19-d. I used room-temperature synchrotron X-ray data (λ = 1.196132 Å). For the Rietveld refinement I used the FullProf software. Accordingly, I got Bragg R-factor: 3.377, RF-factor: 2.261, Rp: 15.5, Rwp: 17.6, Rexp: 10.73 and Chi2: 2.69. I got one notification: “Conv. not yet reached -> [Max] Shift(Bck_72_pat1)/(eps*Sigma)= -1.05 abs> 1”. How can I solve this issue? Are these values acceptable for a good fit? The fitting pattern is attached.
Relevant answer
Answer
The negative value is a mathematical fitting artifact; by adopting a negative occupancy the Rwp or chi2 drops, but there is no physical meaning in a negative occupancy.
Sometimes a specific atom is proposed at a particular site with some occupancy, describing substitutional or interstitial occurrence, and the fitting result gives a zero or near-zero negative value. That means the suggestion is unacceptable, or at least unprovable. However, with known atom distributions on already symmetry-described sites, a negative value has no physical meaning for a regular structure description.
Some software parameter descriptors are polynomial functions, and one of them may eventually assume negative values. Check the software documentation for that particular parameter descriptor, whatever it is. Always be sure the negative value is not a mathematical artifact.
Best regards,
WNM
  • asked a question related to R Factors
Question
9 answers
The whole dataset was collected on a Likert scale. I am using Prof. Gaston's plspm package in R for SEM modelling. I think different age groups must show differences, but I can't test differences among more than two groups at the structural level. My question is: if I divide the data into various age-based subgroups and prepare SEM models separately for each subgroup, is that meaningful? How do I justify that the models are significantly different? I was not able to perform an ANOVA test to check the difference among models. What should I use? Please guide me. Thanks.
Relevant answer
Answer
SmartPLS has excellent capabilities for multigroup PLS-SEM analysis. You can, for instance, access statistical testing for path differences between groups. This software is not open source, but you can download and use it freely for a 30-day trial (maybe it could help).
  • asked a question related to R Factors
Question
6 answers
I am using "PLSPM" package in R. I received a warning message of "Setting row names on a tibble is deprecated". I am not an expert in programming. I want to know what are the effects of this warning message on my model? and how to fix it ? if it can influence my model.
Relevant answer
Answer
Row names are generally bad practice in tidy R programming. If a data.frame object is transformed into a tibble (tbl_df), the row names should be put into a separate variable, for example Participant_ID.
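For illustration, a small self-contained example of moving row names into a column before converting; the data and the Participant_ID name are made up, and rownames_to_column() comes with the tibble package.
library(tibble)
df  <- data.frame(score = c(3, 5, 4), row.names = c("P01", "P02", "P03"))
tbl <- as_tibble(rownames_to_column(df, var = "Participant_ID"))  # row names kept as a normal column
tbl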
  • asked a question related to R Factors
Question
17 answers
What are the most valuable lessons you've learnt after using R?
Relevant answer
Answer
In my view, R is efficient for quick data analysis and visualization. It is also simple to expose results through an interactive web app with the shiny package.
However, I agree with Clément Poiret that Python is almost inevitable when you work in industry, thanks to its versatility and its ability to fit continuous-integration processes. I first learned coding with R at school; a few years later I had to use Java and C++. This led me to review and improve the way I coded in R. After that I learned Python for OOP and statistical programming, which led me to improve my R skills again. Now, with a few years of experience, I often switch between R and Python depending on the task. As I said, I always use R and RStudio for data exploration (I find it powerful for dealing with big datasets and parallel processing) and for quick reports with R Markdown (HTML and TeX). I necessarily use Python when I need robustness for heavy projects and OOP (e.g. to improve factorization and inheritance).
Finally, I would say that there is ALWAYS a rigorous way to code in R, but it is not mandatory, and the language does allow bad practice to obtain the same results. It is not just about code performance, but also about the syntax of the functions or even simply about the code implementation. For statisticians working in teams who have learned only R, I would recommend using packages with documentation (e.g. roxygen2) and unit tests (e.g. testthat).
  • asked a question related to R Factors
Question
3 answers
How does the Bragg R-factor value affect the chi-square value?
Relevant answer
Answer
The complete answer to your question takes several considerations.
Nevertheless, there is no simple way to distinguish a reasonably good fit from one that is simply wrong based on R-factors or other discrepancy indices.
A large number of Rietveld indices (R-factors) have been proposed, and it is virtually impossible to pick one that can be used as an absolute measure of refinement quality.
The reasons need to be clear: a good fit does not always mean a low Bragg R-factor. It means a physically correct and acceptable description of the material. There are several ways to obtain relatively low Bragg R-factors that are completely meaningless; it is a widespread mistake.
Diffraction data are a set of intensities measured at a set of specific momentum transfer (Q) values, usually expressed as a 2θ scan.
Check the limitations and take a wide view of the problem, so please read the reference below.
It is an excellent discussion of how good a fit is good enough.
Best regards,
WNM
  • asked a question related to R Factors
Question
1 answer
The structured area does not have a particular geometrical feature all over the surface, e.g. cones, pillars, or holes. Moreover, it has micro-capillary features (valleys of width 50 microns) with nano-features on top of these micro-capillaries. Can we calculate the roughness factor (r) from atomic force microscopy (AFM) images?
Regards,
Nancy
Relevant answer
Answer
Dear Dr. Nancy Verma ,
I suggest you have a look at the previous discussion on RG, available at:
Best regards, Pierluigi Traverso
  • asked a question related to R Factors
Question
4 answers
I'm working with the Universal Soil Loss Equation to estimate the soil loss that occurred in the South Sinai region using GIS tools, but I have a problem estimating the R-factor in USLE due to lack of data, so I intend to estimate the R-factor from the available annual rainfall data.
Relevant answer
Answer
You should determine a regression equation for local conditions. Calculate the R factor for at least several stations according to the original methodology using high-resolution rainfall data, and then use correlation and regression analyses with low-resolution rainfall data (daily, monthly, annual) to determine the regression relationship between the R factor and the low-resolution rainfall data.
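As a sketch of that regression step in R (all numbers below are invented; in practice the station R values come from applying the original high-resolution methodology):
stations <- data.frame(P_annual = c(350, 420, 510, 600, 720),   # low-resolution predictor, mm
                       R        = c(410, 520, 690, 830, 1050))  # R factor from pluviograph data
fit <- lm(log(R) ~ log(P_annual), data = stations)              # power-law form R = a * P^b
coef(fit)
exp(predict(fit, newdata = data.frame(P_annual = 480)))         # estimate R where only annual P exists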
  • asked a question related to R Factors
Question
6 answers
For the study area of Manipur state, how can I download TRMM data annually to compute the soil erosivity factor (R) in GeoTIFF or shapefile format?
  • asked a question related to R Factors
Question
15 answers
I am doing SEM for a model consisting of the latent structure
model <- ' PE =~ q1 + q2 + q3
EE =~ q4 + q5 + q6
FC =~ q7 + q8 + q9 + q10 + q11 + q12
TR =~ q13 + q14 + q15 + q16 + q17
EC =~ q18 + q19 + q20 + q21 + q22 + q23
BI =~ q24 + q25 + q26 + q27'
I am using the lavaan package for the analysis. I want to check the AVE for each construct and also the correlations of all constructs to check discriminant validity.
Relevant answer
Answer
Here is a function (condisc) I have programmed in R that examines the convergent and discriminant validity of a measurement model estimated using the cfa()-function of lavaan: https://github.com/mmoglu/condisc
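If you prefer to stay within lavaan and semTools, a minimal sketch is shown below. It runs on lavaan's built-in HolzingerSwineford1939 example so it is self-contained; substitute your own model syntax and data. In semTools, reliability() reports the AVE in its "avevar" row (newer semTools versions may direct you to AVE()/compRelSEM() instead).
library(lavaan)
library(semTools)
model <- 'visual  =~ x1 + x2 + x3
          textual =~ x4 + x5 + x6
          speed   =~ x7 + x8 + x9'
fit <- cfa(model, data = HolzingerSwineford1939)
reliability(fit)["avevar", ]     # AVE per construct
lavInspect(fit, "cor.lv")        # correlations among the latent constructs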
  • asked a question related to R Factors
Question
2 answers
To calculate the value of the response modification factor "R" of any structure, a nonlinear static analysis can be done, and for that a linear static design is usually carried out. During that stage, a value of R (based on the standard followed) is assumed to calculate the base shear for design. Doesn't that predefined value affect the final result?
Relevant answer
Answer
Dear Priyana
In ATC-19 (ATC 1995a), the R-factor is the combined effect of the overstrength factor R_Ω, the ductility factor R_μ and the redundancy factor R_R (R = R_Ω × R_μ × R_R), which influence the structural response during earthquakes.
  • asked a question related to R Factors
Question
2 answers
Factors influencing occupancy and detection of opossums in a camera-trapping study;
statistical analysis undertaken in RStudio via the unmarked package.
I have a site covariate, distance to the nearest trail, and an observation covariate, number of humans detected on the cameras per day.
I feel that these would correlate, but they are both included in my final model.
Is there a way of doing a correlation test (cor.test) on these covariates?
My attempts so far have not worked.
Relevant answer
Answer
I guess both are continuous (numerical) variables. You can perform correlations easily after checking the data distributions.
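For illustration, a small self-contained sketch (the numbers are simulated); because the observation covariate varies by day, it has to be summarised to one value per site before correlating it with the site covariate:
dist_trail  <- c(120, 45, 300, 80, 210)                  # site covariate: distance to nearest trail (m)
humans      <- matrix(rpois(5 * 30, 2), nrow = 5)        # observation covariate: humans/day, 5 sites x 30 days
mean_humans <- rowMeans(humans, na.rm = TRUE)            # one summary value per site
cor.test(dist_trail, mean_humans, method = "spearman")   # rank-based test, no normality assumption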
  • asked a question related to R Factors
Question
8 answers
To identify the exact factors behind drug resistance.
To take measures on drug costs.
Relevant answer
Answer
The development of drug resistance in bacteria is a natural process that can't be stopped. However, it can be slowed. Resistance is currently developing at an alarming rate because of inappropriate and unnecessary antibiotic use.
Inappropriate use in healthcare settings includes using antibiotics when they are not needed for treatment, prescribing the wrong type of antibiotic for treatment, and prescribing antibiotics for an inappropriate duration.
According to the CDC, an antibiotic prescription is inappropriate half the time. For instance, antibiotics do not resolve viral infections such as the common cold, influenza (flu), most bronchitis, most sore throats, and the majority of sinus infections. However, unnecessary antibiotic use for these viral infections is still widespread.
In food animals, antibiotics are sometimes added to livestock food and water to promote growth and prevent disease. More than half of antibiotics currently made are used to enhance livestock growth. This contributes to bacteria becoming resistant to drugs important for human health.
  • asked a question related to R Factors
Question
3 answers
The peak seismic drifts of steel MRFs are calculated by amplifying the elastic deformations due to the design lateral forces with a deflection amplification factor (DAF). The European and the Canadian codes define the DAF to be equal to the response modification factor R, while ASCE 7-10 defines the DAF to be equal to 0.6875R. Any thoughts on why there is this variation between the seismic codes?
Relevant answer
Answer
R combines both strength and deflection through energy/toughness, whereas Cd represents deflection only. I personally attribute the use of smaller values for Cd than R to stability issues. Allowing Cd values close to R may be dangerous, since the overall stability of the structure and of the surrounding elements of the seismic lateral system may be jeopardized, and the smaller value perhaps also helps to control the second-order internal forces.
  • asked a question related to R Factors
Question
1 answer
Hi everyone, I am looking for advice on solving an RNA structure. I collected one dataset with a high resolution of around 1.5 Å in the H3 space group. The RNA contains 23 nucleotides; I calculated the cell content, and it suggested two strands in the asymmetric unit. When I analyse the data with phenix.xtriage, it says that the data have translational NCS and may also have an NCS rotation axis parallel to a crystallographic axis, and that the data have a higher crystallographic symmetry (R32:H). When I processed the data in R32 and recalculated the cell content, it suggested one strand in the asymmetric unit and still translational NCS. I used a published structure with high sequence identity as a search model to perform molecular replacement; it output a solution with high TFZ (26.8) and LLG, but when I open it in Coot the electron density doesn't fit well, and after refinement the R-factors are very high, around 47-48%. Have you ever encountered such a situation when solving RNA structures? I look forward to your advice. Thank you.
Relevant answer
Answer
I don't know whether you are still working on it, but personally, if your R-factor is that high, it might simply be because you did not find the right solution in MR.
  • asked a question related to R Factors
Question
6 answers
Hello All,
Thank you so much for your intelligent responses to my public questions in the past and to those of others for which you have provided answers. I am trying to determine the erosivity from hourly rainfall data. Usually, to determine erosivity from such data (1-hr rains), I need information on intensity. To calculate intensity, do I need to use rainfall amount/time? Or is the amount of rainfall within that hour its intensity per hour? What if it is a 30-min rainfall with 10 mm of rain - what is its intensity?
The major challenge is this: how do I determine the rainfall intensity for use in the erosivity equation when I have hourly rainfall? I think most of the papers I read are really vague about this.
I hope I can get this clarity.
Relevant answer
Answer
Hello Amobichukwu,
I suggest this paper for the details of the erosivity index (intensity, rainfall energy, ...).
Kind regards,
Mourad GUESRI
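As a quick numeric illustration of the intensity part of the question (intensity is depth divided by duration), 10 mm falling in 30 minutes corresponds to 20 mm/h:
depth_mm   <- 10                      # rainfall depth in the interval
duration_h <- 0.5                     # 30 minutes
intensity  <- depth_mm / duration_h   # = 20 mm/h
intensity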
  • asked a question related to R Factors
Question
12 answers
Given the different quality metrics for protein models based on XRD and NMR data, I want to know your opinion.
In particular, how good is the agreement between NMR- and XRD-derived structures? In case of disagreement, which method would you trust more (assuming both were reasonably conducted)?
When facing the solution of a structure de novo, how is the method chosen? How much does it cost to solve a structure by X-ray compared to NMR?
Thank you very much for your insights!
Relevant answer
Answer
The two methods of structure determination are based on completely different properties of proteins. An NMR structure is calculated from magnetic properties of several nuclei, while an X-ray structure is derived from the electron density of non-hydrogen atoms.
Structures from both methods have high confidence: the determination of secondary structure elements, their relationships and the loops playing a role in catalytic activity is reassuring. The possibility of some catalytic reactions occurring under the measurement conditions reinforces confidence in the 3D structures. Regions weakly defined by NMR, or possibly affected by crystal packing, are determined less confidently. These are usually longer surface loops, chain termini or domain-interconnecting loops that have flexible conformations under natural conditions as well. Pockets with catalytic activity, and the framework of secondary structures responsible for fixing the pocket, are usually stable enough not to be affected by changes in conditions.
The nature of the two methods means that NMR structures are never as concrete as X-ray ones; they allow larger freedom for motions of loops and termini. It is probable that this freedom is related to the dynamics of these regions in solution; however, the modelling character of molecular dynamics calculations is a reminder to handle such comparisons carefully.
  • asked a question related to R Factors
Question
4 answers
Hello, I am pursuing the below research question:
How does species composition change within 64 plots in response to the addition of treatments both independently and interactively?
I am using Bray-Curtis distances for my analysis. I have chosen PERMANOVA because my data are highly non-normal and also because I want to look at overall community differences. Thus far, I am able to execute the PERMANOVA in R (using: comm.div <- adonis2(comm.BC ~ Shelter*Nutrients*Burn, data = community, permutations = 999, method = "bray")). To my understanding, and based on the output, I do get the individual and interaction effect significance.
I want to make sure that I correctly perform PERMDISP using "betadisper", somehow taking into account my factors and their interactions. Is it supposed to be performed before PERMANOVA, much the same way we would first perform a Levene's test? Additionally, what coding could I use to input my interaction terms in the "group" argument of "betadisper"? They are currently being called from a data table in which each treatment is a column and the rows are one of two levels (y/n).
I am also quite confused about how to interpret the results of the PERMDISP test. What does its significance mean with regard to PERMANOVA? (I.e. I've read that you need to corroborate PERMANOVA results with differences in PERMDISP.)
Much thanks for any answers or insights you can provide.
Relevant answer
Answer
A significant Permanova means one of three things. 1) There is a difference in the location of the samples (i.e. the average community composition), 2) There is a difference in the dispersion of the samples (i.e. the variability in the community composition), or 3) There is a difference in both the location and the dispersion.
So, if you get a significant Permanova you'll want to distinguish between the three options. That of course is why you need to run the permdisp. If you get a non-significant Permdisp you can conclude the first option above is the correct one. If you get a significant Permdisp then it is either the second or third option (that is there is definitely a difference in dispersion and maybe a difference in location). There is no fool-proof way to distinguish between these two, but looking at an MDS plot of the data will hopefully help you do so (also see note 2 below).
A few notes.
1) Permanova is not as powerful as Permdisp at detecting differences in dispersion, so it's possible to get a non-significant Permanova and a significant Permdisp. This would mean that you have a difference in dispersion only. That can be ecologically important in itself.
2) Transforming your data (square-root, cubed root, log, presence-absence) can reduce your dispersion and is a potential way to help distinguish between dispersion and both dispersion and location. However you need to take the transformation into account when interpreting the results. A significant permanova on raw data and a significant permanova on presence-absence data are interpreted differently, but that is another discussion in itself.
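On the coding part of the question, a minimal sketch with vegan is shown below. It assumes the comm.BC distance object and the community data frame from the question, and simply crosses the three treatment columns into one grouping factor for betadisper.
library(vegan)
grp  <- interaction(community$Shelter, community$Nutrients, community$Burn)  # one level per treatment combination
disp <- betadisper(comm.BC, grp)
permutest(disp, permutations = 999)   # permutation test for homogeneity of multivariate dispersions
boxplot(disp)                         # distances to group centroids, by group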
  • asked a question related to R Factors
Question
3 answers
Respected sir,
I am a Ph.D. student at IIT Gandhinagar. I am currently exploring the seismic analysis and design of liquid storage tanks and comparing all the codal provisions related to steel storage tanks. I reviewed certain codes (API, NZSEE, EC-8, IS 1893) and found that none of them considers soil-structure interaction (SSI) analysis for the determination of the R factor, even though for broad tanks I found that SSI increases the base shear and base moment. So my question is: will consideration of SSI create a significant influence on the R value?
Relevant answer
Answer
I agree with Georgios' answer.
  • asked a question related to R Factors
Question
5 answers
Hi,
Please let me know whether this is the right approach for the calibration of flow in SUFI-2:
1. I should first check the performance of the initial model against observed data:
i. Should I compare the simulated results given as "flow_out" in the "rch" file in SWATOUTPUT.mdb with the observed data?
ii. Should this be performed for the whole discharge period or only for the period for which I intend to perform calibration?
iii. Should I compute the water balance before doing calibration using my first model? Please also let me know what else I should check apart from comparing the observed and simulated discharge in this first step.
2. Based upon the comparison of simulated and observed flow, I should do the regionalisation of parameters.
3. Sensitivity analysis to find out the parameters most sensitive for the watershed with regard to flow.
4. I should set the ranges for the parameters identified in step 3.
Can I first use the ranges provided as defaults in SWAT-CUP, or should I set the ranges based on a literature review (in case I find any)? For this step I should run the simulations 500-1000 times and check the values of the objective function, the 95PPU and the r-factor. If the ranges are satisfactory, then those ranges set for the parameters will be final; otherwise I should repeat the model for another 500-1000 simulations with changed ranges.
Please correct me if my approach is wrong.
Thanks
Relevant answer
Answer
Dear Saima,
See the steps and sample formats in the attachments.
Do you want to prepare the observation data series in SWAT-CUP format?
  1. The model is said to be calibrated after adjusting the observed data into the appropriate SWAT-CUP format,
  2. running any of the 5 SWAT-CUP packages (SUFI-2 in this case), and
  3. fixing the optimum model parameters that give relatively high values of the model performance rating criteria of interest (NSE, R2, KGE, MNSE, bR2, ...).
  4. Then you choose the final SWAT-CUP run, and that result is the calibrated model simulation result.
  • Attached is an .xlsx file with formats to convert observed data into a SWAT-CUP-acceptable format. You can paste your data into the spreadsheet and run SWAT-CUP.
  • I have also attached a SWAT-CUP file so you can easily see where you should make changes.
For more detail, refer to this article:
I have attached the SWAT-CUP SUFI-2 formulation. For the rest of the calibration uncertainty analysis techniques, you may refer to the following paper, as you may choose any of the available SWAT-CUP packages (DOI:10.3390/w9100782).
Regards,
Gebiaw
  • asked a question related to R Factors
Question
4 answers
I recently submitted a paper in which I stated the fabrication of a single-phase material. Using the MDI Jade software for the XRD analysis, I matched all the peaks with the PDF of that respective material. I also performed Rietveld refinement and whole-pattern fitting, though I did not submit those data initially. Now the reviewer wants a table with refinement results to prove the goodness of fit and to include the crystallographic lattice, ion positions and other results. I went through the article "R factors in Rietveld refinement: How good is good enough?". Any sort of guidance would be appreciated. Thank you.
Relevant answer
Answer
Sorry P. Sikder, I did not carefully read your description. My mistake, I apologize.
The peak positions only correlate to the mathematical abstraction of the crystal lattice. The lattice defines all peak positions but does not say anything about the crystal structure, i.e. the actual phase. Only the intensities give you some kind of confirmation that the identified phase is actually correct. If it is not correct, as in your case, two possible explanations exist: A) your phase is incorrect, or B) some other factors affect the detected intensities. The most probable are grain size (too big) or texture (preferred orientation). The first problem can easily be solved by grinding and measuring again. The second could be a bit more complicated, but you can move or tilt the sample during the measurement to find out whether the intensities change. If they do, it is certainly B). If the intensities do not vary, your structure is wrong.
  • asked a question related to R Factors
Question
4 answers
Dear all!
Has anyone refined a crystal structure using Huber G670 data (with a Guinier camera), the Jana2006 software and a powder sample? I have obtained reasonably good diffractograms, but I cannot refine anything except the background, the zero shift and the unit cell parameters (the final diffractogram is attached). If I introduce any profile parameter, the R-factors grow and the refinement stops...
The data are given as an XY txt file.
Structure parameters: R3c (No. 161); a ~ 10.82; c ~ 38.04; angles: 90-90-120
Co Kα1 radiation
Thanks!
Relevant answer
Answer
Dear all,
may I ask you to give the answers in English, please, so that we will all be able to participate in the discussion?
Many thanks in advance.
  • asked a question related to R Factors
Question
1 answer
What is the response reduction factor for a composite structure as per IS 1893?
For this type of structure the value of the R-factor is not given in IS 1893. What value of the R-factor should I take in the analysis of a composite structure?
Relevant answer
Answer
Take it as 5 (five), as a special case.
  • asked a question related to R Factors
Question
5 answers
Hi all, 
I have obtained a 1.5 Å dataset of a protein-peptide complex from a synchrotron and am building a model with Phaser MR and Molrep separately. Both programmes build the protein part perfectly, but in Phaser MR the solution gives only part of the peptide density, while from Molrep I did see nearly the whole peptide. Because the model I used contained the peptide itself, I am worried that the Molrep solution may be subject to model bias. Any idea how to make sure of this?
For more information, I refine the model in Coot and run Refmac. The statistics are:
Resolution limits = 22.164 1.550
Number of used reflections = 75160
Percentage observed = 97.5762
Percentage of free reflections = 0.0000
Overall R factor = 0.2547
Overall weighted R factor = 0.2710
Overall correlation coefficient = 0.9260
Cruickshanks DPI for coordinate error= 0.1068
Overall figure of merit = 0.8387
ML based su of positional parameters = 0.0582
ML based su of thermal parameters = 1.5322
Is there anything wrong with 'Percentage of free reflections = 0.0000' ?
Thanks.
Relevant answer
Answer
Hi Sam, 
additionally to what Stefan suggested, I would run the MR with a model lacking the peptide and then build it in from scratch. This way you get the most unbiased information for the ligand. Be aware, though, that the map features for the peptide might initially be less good; they should improve gradually with your model's quality and completeness while you build and refine.
All the best
Christian
  • asked a question related to R Factors
Question
3 answers
In metal complexes, please give some ideas on how to reduce the R-factor.
Relevant answer
Answer
It's unclear whether you are talking about the refractive index or the structure-refinement R-factor; these are not related. Since you can't alter the refractive index of a given crystal without altering the composition, I assume it's the latter. Assuming that you're using SHELX to refine your model, applying an extinction coefficient is sometimes necessary (EXTI). Also, if you include a lot of high-angle, weak reflections, these tend to raise R. One solution is to exclude them with a SHEL command. Low-temperature data acquisition on a better crystal will of course also help.